
Apple II in low-end Market?


kool kitty89


Apple has always had good quality products, but overpriced ones. I had an Apple //e, and later a IIgs, and they were great machines. The IIgs was purposely given a low 2.8 MHz speed so as not to compete with Macs. So the IIgs was sort of a dead-end machine. I would have preferred it if Apple had continued the Apple II series, without any Lisa/Macs in the picture.

Yes, that's more or less my premise (though I commented more on how they might have been best off continuing with the Apple II -especially expanding into a broad range of machines).

 

The IIGS was a dead end due to the Mac . . . it's actually surprising they managed to get it to market with all the internal opposition. (they should have been making incremental upgrades much earlier in the Apple II's life, with heavier investment in consolidation -on top of an expanded business model to push their market to real mainstream status -as it was, no computer really went mainstream until the C64, followed by the PC)

Going back to what kool kitty 89 said in post #28 - The Atari 8-bit units had far more integration than the Apple II.

The sound and graphics chips alone, on the 400/800, were totally optional on the Apple II, II+, & //e. The Atari units had SIO and plenty of input options with the 4 joyports. That would be 8 A/D converters + 20 switches for 4 joysticks. RF modulator too.
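(a quick sanity check on that tally, as a minimal sketch -assuming the usual 4 ports with 5 switch lines and 2 paddle lines each:)

```python
# Back-of-the-envelope tally of the 400/800's controller I/O.
PORTS = 4
SWITCHES_PER_STICK = 4 + 1   # four directions plus one trigger
PADDLE_LINES = 2             # two POKEY pot (A/D) inputs per port

print(PORTS * SWITCHES_PER_STICK, "switches")   # -> 20 switches
print(PORTS * PADDLE_LINES, "A/D converters")   # -> 8 A/D converters
```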

 

All that was optional on the // series, at great cost!

Exactly, yet all of that (in terms of actual logic) was far simpler than on the A8. More than 2 joyports was unnecessary (only 1 Atari game ever used them in a useful manner -Asteroids), added sound was only supported in a few games, and added video capabilities never were AFAIK. (the IIe onward had built-in double hi-res support, and some games used artifacting for 140x192 in 16 colors -most games used 140x192 in pseudo 4 colors)

 

With even less aggressive integration than the A8, Apple could have had a highly cost effective machine with an established market to build on (and expand from the niche mid-range market to higher and lower end ranges while linking them with flexible expandability).

 

Having technically better hardware is rarely the most important factor; it's all about getting support and market recognition (which Apple had in spades by the end of the 1970s -Tandy had a good chunk of that too, but they also failed to push a flexible standard built on the original TRS-80).

 

 

For the low-end, again, I was thinking of a baseline 16-32K machine with plain Apple II video and beeper support, plus one or 2 built-in joyports and a cut-down form factor relying on a simple external expansion port (with a separate full chassis like the 1090XL), albeit one could argue that including an AY8910 as standard (especially if using its I/O ports to add 2 digital joyports) could have been a good move too.

Think somewhat along the lines of the CoCo. (technically rather similarly bare-bones as the Apple II . . . though better in many respects -better CPU, 6-bit DAC rather than 1-bit toggle, more built-in peripheral interfaces, and even flexible expansion via the cart slot -albeit no model with built-in expansion slots . . . so you'd have something like the CoCo in cost/capability, but with a very strong established market to build on -it would have been rather interesting if Tandy had developed an Apple clone in place of the CoCo . . . or a TRS-80 compatible for that matter -the TRS-80 was a pretty big competitor early on, not as much prestige as Apple but significantly cheaper and still a decent feature set for the time -not sure which actually sold better from '77 to 1980)

 

 

Plenty of options on the technical end, but the main thing was just consolidation and the much more important non-technical issues of actually expanding and evolving their rather limited business model. (same thing for Atari; massive potential lost on the A8 due to lack of the right business management and marketing . . . and some technical issues too -mainly due to management decisions like making the A8 purely an "appliance computer" -ie closed box with idiot proof smart peripherals and no comprehensive low-level expansion -aside from RAM on early models)

 

 

 

With Apple, you had a REAL computer, one of the early market leaders, with the potential to become a standard like the one IBM ended up pushing later . . . unlike Atari or several others, they (or rather Wozniak) got the flexible expansion down from the start, and the only thing lacking was good business/management. (the Apple II seems to have done well more in spite of Apple's management than because of it)

I believe it is compatibility and pervasiveness throughout the market that enables any one standard to succeed :roll: Compatibility will trump performance when it comes to needing a large number of cheap installations. Compatibility is important for getting something adopted throughout a culture.

Yep, that's why the Apple II was such a wasted opportunity . . . it had all the features that allowed the PC to become a success (open expansion architecture, respect in the industry, a wide range of software support including business applications, etc), but lacked the management to pull it off. (to the contrary in the extreme, given Apple management virtually tried to kill off the II line at the beginning of the 80s)

 

Others made the same mistake too . . . even setting aside PC compatibility (be it before or after the PC really became a standard), you had plenty of cases where machines totally lacked features promoting flexible standardization through open-box expansion, let alone going a step further and taking advantage of that flexibility to establish an intercompatible standard across a breadth of computers. (Atari and Commodore could both have benefited there . . . Tandy did sort of have open expandability too, but lacked in other areas -technical and business-wise, though the VIC and C64's cart slots also offered some pretty flexible potential for expandability)

 

The Amiga and ST also had those problems. The ST was totally closed box until the MEGA (even then rather limited), and while the Amiga had somewhat better expansion, both lacked a wide range of machines until the late 80s.

 

The "PC" could handle so many file formats, hardware seemed to be everywhere, and it could interface to everything. Some of those points existed in the II series, but were never really pushed or developed. Had Apple really, really marketed the II with vigor, it would be the "PC" of today.

Yes, exactly . . . though if others (Tandy, Atari, CBM) had pushed direct competition on those grounds, it might have been a different story.

 

As I brought up several times in this thread: http://www.atariage.com/forums/topic/181398-atari-and-microsoft-and-the-st/page__st__25

 

File format and software feature cross-compatibility could have made mid/late entries into the computer market more realistic competitors in the US (Europe was still more open obviously). In particular, I suggested that Atari (or CBM) could have promoted PC compatible file and disk formats and heavily encouraged 3rd party developers to cater to PC standardized file formats as well (for text, images, spreadsheets, etc).

If they could have successfully managed that, there could have been real direct PC alternatives on the US market with comprehensive standardized support on top of generally much better (and more cost-effective) hardware and OS performance. (Atari or CBM also promoting a wide range of machines with open expandability would have been very significant, not to mention licensing the standard to further extend its pervasiveness -especially for the ST, where you'd likely end up with tons of unlicensed clones eventually anyway, so might as well capitalize on that while you can -and possibly delay unlicensed clones)

 

I sort of wanted an ST, off and on. But it was too expensive at the time, considering I was spending money on the Amiga. Now, for all its hardware strengths and whiz-a-bang graphics, the Amiga almost had a special appeal. But...but... I found it frustrating to use, and "too much" got in the way of what I wanted to do at the time. And that was simple word processing. And I mean simple. A task which, today, is handled by Notepad/Word with aplomb.

In that respect, didn't the ST actually have a significant edge over the Amiga? (especially on the OS end -significantly less powerful and flexible, but also much simpler, stable, and user friendly -especially compared to early versions of Amiga OS)

 

I always felt the Amiga was full of hot air and grandiose promises. The ST to a lesser extent. So many cool things to do, but everything was so conditional, you had to have this this and this.

PC was (and is) the same way, a lot of things requiring added hardware . . . except the open expansion architecture makes that a lot easier. (no hacking/modding and no buying a new machine -up to a point- albeit later Amiga models had some decent plug-in expansion support -later high-end box models had a lot of internal expansion too)

 

It seemed with the Amiga I would need to have spent something extra-in-addition to get that level of versatility and functionality. Either buying a hardware converter, memory expansion, buffer, especially extra software. Yes, that's it, the extra software. Extra drivers. It was not an easy task. Despite the apparent crudeness, and yet simplicity, of the Apple II, it worked.

Again, that seems to have been much less of an issue on the ST. (or even PC to some extent . . . though driver issues became more serious as software became more complex -especially once you hit extended memory management)

 

I wonder just how much of that was actually due to the Amiga's complicated OS . . . or how much would have been resolved if the OS had been much simpler from the start and built upon more gradually.

 

What else was nice, real nice, was before I got a real word processor program. I was able to use a text editor built into either Ascii-Express, or wait, no, it was Pro-Term I think, yes. I could do basic WP on that. That was nice. I still have original text files we wrote up on that!

Amiga OS didn't have a basic built-in text editor??? (or GEM/TOS for that matter)

 

The Amiga was good for static painting and artwork. Very good. And I learned a lot about graphics images and electronic painting and stuff. But when it came to video, well, that was hell, for me. I got that Digi-View thing. And rented a Vidicon or Saticon camera to digitize some photos or something. Well, I had to spend MORE money and get the right cables, a gender-changer, a stand, lights, hardware for mounting a color wheel, extra memory, another disk drive, a power supply for the lights, a flat glass plate to flatten the picture. All that stuff + it took like 4 minutes to grab an image. Ridiculous even back then in the 90's! And even more money had to be spent on getting an image from the VCR to disk. That is what pissed me off about the Amiga. It made the promises, but in actuality it was a huge pain in the ass to work with. Tedious, slow. I'd rather have waited another 3 years or so, when digital cameras started coming to the consumer level. Far superior in every way. But from the Amiga marketing and advertising material they made things look so easy! Bullshit.

On the video end of things, wasn't the biggest advantage of the Amiga just the analog genlock? (in particular, you could use that to do analog video editing with digital special effects without ever having to digitize video -you rendered the digital effects and then applied them to the genlocked analog video source which then looped out to a 2nd generation tape recording)

Actual digital video editing would have been hell until the early/mid 90s (and even then you needed really high-end workstations)

 

Now that I think about it, it was the Amiga (and some multi-media PC experiences of the time) that turned me off from being an early adopter. With the PC, it was the promise of 3-D accelerators and soulless FMV cut-scenes. Only now, yes, only now are we seeing GPUs worthy of even the name "GPU".

True GPUs didn't arrive until the 2nd or 3rd generation (ie high-end late 90s accelerators). Prior to that, everything was basically advanced blitters (the "GPU" in the PSX is also a glorified blitter . . . the GPU in the Jaguar was not).

Actually, any video card making use of a TMS340 series coprocessor would have had a genuine GPU (in the technical sense).

All those other early accelerators (even 3DFX's stuff) required a good amount of CPU assistance. (when you started seeing "hardware transform and lighting" being advertised, that's when you had "real" GPUs arriving ;))
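To make "hardware transform and lighting" concrete, the per-vertex work below (a matrix transform plus a Lambert diffuse term) is roughly what pre-T&L accelerators left to the CPU, and what the late-90s chips finally moved on-die. A minimal sketch with made-up inputs, not any particular card's pipeline:

```python
# The per-vertex math that "hardware T&L" moved off the CPU:
# transform each vertex by a 4x4 matrix, then apply a diffuse light term.

def transform(m, v):
    """Apply a 4x4 row-major matrix to an (x, y, z, 1) vertex."""
    x, y, z = v
    return tuple(m[r][0]*x + m[r][1]*y + m[r][2]*z + m[r][3]
                 for r in range(3))

def lambert(normal, light_dir):
    """Classic diffuse intensity: N . L, clamped to [0, 1]."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, min(1.0, d))

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(transform(identity, (1.0, 2.0, 3.0)))      # -> (1.0, 2.0, 3.0)
print(lambert((0, 0, 1.0), (0, 0, 1.0)))         # -> 1.0
```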

 

Besides that, PC 3D was pretty damn impressive without 3D accelerators, though the push for accelerated 3D and multimedia (MPEG1/2/etc) certainly made an even more dramatic shift for the late 90s. (there's a ton of awesome late 80s and early 90s PC stuff out there though, including 3D and multimedia stuff) You had that early quirky FMV stuff -some good, some bad, some depending more on personal taste- and then you had more and more detailed drawn/digitized/CG art and buffer animation -the evolution of Monkey Island and the like- as well as 3D and multimedia moving forward from the likes of Wing Commander I to WC II -with dialog- to X-Wing -especially the 1994 CD-ROM release with improved animation and full speech, though it only used about 10% of the disc capacity- then TIE Fighter, and then, perhaps the biggest milestone in the industry: Wing Commander III, one of the first (if not the very first) true multimillion-dollar-budget games, with really high production value through and through, from high-end workstation-quality prerendered 3D animation, to the high-quality live-action cast, to the more dynamic use of multimedia in-game (animation and speech) and, of course, fully texture-mapped polygonal 3D. (that $4 million budget still paled in comparison to WCIV's $12 million, let alone the massive push for high-end games following that -in part spurred by Sony)

Of course, Myst was also a big part of the multimedia revolution, but it didn't use multimedia in the manner that has become so distinctive since then. (the use of cutscenes and in-game elements to tell a story and let you -more or less- play out a movie or story -and not in the manner of the stereotypical "interactive movie", though WCIII and IV were indeed advertised as interactive cinema)

 

Of course, the push for high-end game development did have some added consequences, but it brought a lot of amazing stuff to the world of video games as well. (plus, today with the rise of lower-end downloadable content, you see a lot of that small developer spirit coming back, though some genres are more or less)

 

I also would hardly call early multimedia soulless . . . in fact, some of that early quirky stuff had some of the last examples of more traditional "classic" US gaming style. (especially for the adventure games, though that has also started to come back recently -including that classic element of humor that was embedded in many of the best classic adventures -still, I doubt Zork will ever be revived)

 

 

This is totally off topic now though. ;)

 

One more thing with the Amiga: for all its purported expandability claims, it felt like a closed-off system. Any expansion seemingly needed to be developed with far more resources than what could be done on the //e. Though I think that became prevalent on a lot of the 16-bit systems of the time. Perhaps, but what about the complexity of the 1541 drives? Didn't those have RAM and ROM and a CPU going? Why?

Needing more resources on newer machines was commonplace though, especially as programmers had more resources to work with (ie getting sloppy . . . albeit that's better than not supporting a system at all, which is one of the alternatives ;)).

 

Or are you talking about the drive itself having RAM/ROM/CPU (or MCU rather)? In that case, it's purely up to the interface to the computer. (after all, Atari and CBM disk drives had CPU+RAM+ROM built into their drives -in CBM's case, they also had the DOS in the drive, but Atari only used the controller for SIO iirc -an embedded controller is necessary for serial interfacing as I'm sure you know, though at the time, an embedded controller might be about as powerful as the system's main CPU ;))

Edited by kool kitty89

Nice posts. Enjoying this discussion.

 

One thing from the above that struck me... An Apple-style computer, with a 6809 in it? Would have been sweet, and the 09 could have been clocked like the 02 was, so the video DMA doesn't get in the way of the CPU... With all the slots and such, a machine like that could have punched well above its weight, as seen in the CoCo 3 and the various things done through the cart port.

 

...and the 140x192 graphics were pseudo 6 colors, in all machines but the early rev 0 boards, which did not include the phase shift capability. Apples had the first "color cell" kind of feature, where clashing occurs on byte boundaries, due to color definitions not being pixel addressable.

 

Re: "Pseudo colors" I've thought about that, and I really can't see the distinction. On the Apple, those were real colors, because the video circuit was designed to make artifacting consistent, and the phase shift bit really did provide some color control. It's a hack, compared to other devices that would output a proper color signal, but then again, that machine was kind of filled with hacks. On machines that do output a proper color signal, maybe artifacts are "pseudo colors", as they are actually some artifact, not intended in the design. On the Apple though, Woz is on record saying how he saw the color signal as a "rainbow", realizing all he really had to do was drop a pixel in the right place to get the color. So, that's just real color, done real simple. :)


A tiny correction, the //e did not really have built-in Double-Hires Graphics. It required the additional 64k-ram/80column card. When you added in the extra memory and a few extra gates, THEN the //e could do DHR modes. See? That's the neat thing, you got extra memory, 80 column capability, and DHR stuff. 3 things in one. Though perhaps not all usable at the same time. There were generally 2 types of AUX-slot cards, the 80-column card, and the combo 64k/80column.

 

The 6502 RWTS controlled everything on the DISK ][ drive. The currently running program would halt and stop while the 6502 stepped the motor, read the data, and placed it in memory. There was no underpowered (for the time) DMA going on. If you're going to do DMA, make sure your peripherals and system architecture can handle it. In a sense, the 6502 in the II BECAME the DMA controller. I believe the Apple Disk ][ units were faster than the 1541's and 810's. But someone would have to check that for me.

 

I remember digitizing photos and videotape on the Apple II, Amiga, and PC. All around the same time frame, more or less.

With the Apple II, I didn't have to worry much about file conversion.

With the Amiga, I had to convert stuff back and forth between applications like paint programs and print programs.

With the IBM PC, most other applications seemed to accept the varied formats right from the get-go, or they came with easy-to-follow directions and converters.

 

With the IBM PC format, there were all these EMS and XMS boards (now I use dosbox for old gaming), and all these memory management utilities. Ok. You see, all these confusing standards, BUT, all you had to do was wait a few months after something new came out, and the good stuff floated to the top while the crap stuff tended to wash away. This was still the time when "the guy at the computer store" knew what he was talking about. And you could get solid advice by asking what was compatible and what was working well in the field. I always got cartoony answers when I asked similarly styled questions about the Amiga. Gosh! And that "Intuition/Workbench" operating system, way overcomplex! Things just seemed to "happen" on the IBM and the Apple II series. Demos worked, and things seemed much less conditional.

 

I also believe the 3-piece construction was a huge, but largely unrecognized, contributing factor. It just happened. 3 pieces + maybe-a-printer.

 

The keyboard, the CPU box, the monitor... This allowed for great ergonomics and more customization. Many of the early 8-bitters had to have their disks outside the CPU box, and the keyboard attached. The other way around -disks in the CPU box and the keyboard as a separate piece- made for just so much better configurations and made the computer infinitely easier to place into many MANY MANY workspaces.

 

The tight engineering in the Apple II and the Atari 2600 is what allowed only the best and brightest of programmers/engineers to work with the platform. There was little room for incompetence. And if you slacked, it would show up almost immediately to the point of un-usability, and naturally got itself filtered out. This "way-of-working" remained with the 400/800 and C-64, and the IBM-PC. But, today we have so much computing power, we can hide a 400LB fatman in a size 3 dress! And make it look good. That doomed the Amiga, too much computing capability that remained unfocused and bloated away by overly complex software.

 

A brief touch on expansion: much of the Apple II expansion options, and IBM options, fit into the main case, somehow. If it was outside the case, then that meant it was meant to be played with and handled by a user. If you look at the Amiga, they had processors hanging off the side, hard disks hanging off the side. Modems too, same thing. The IBM and Apple could have that stuff installed *INSIDE*. Sure there were external drives and modems for the II and PC, but you had options! In or out.

 

This always gives me a laugh! Pray tell, how would you run this in a professional workspace, or even at home with kids bouncing off the walls? I don't even know what some of this stuff is...

 

[attached photo: a TI-99/4A with its sidecar expansion modules daisy-chained off the side]

Edited by Keatah

Yeah, there is a lot to be said for "in the case" expansion. I think the mere idea that it is going to happen leads to the kind of planning that allows expansions to become a real part of the machine, and not some external device.

 

The difference is often subtle, like with disk drives. It can be profound too, like the CP/M card, or the many odd video / CPU cards for the Apples.

 

Been doing a lot of reading from this book, "Understanding the Apple //e" There is a similar one, by the same author for the //+ What a great resource!

 

DMA as implemented is very clever, given the limited design. Also appreciate how the video was done in parallel with the CPU. This was done on the CoCo as well, and to me, it's a great feature, though it does complicate things like DMA. On the flip side, it's a serious speed boost, which matters on the Apple, as it's running at 1 MHz.

 

Anyway, I can see how excessive DMA, or multiple sources for it, could cause big trouble, if not managed carefully. Some cards would be mutually exclusive, or not perform, or clash with others. Still, it was nicely done for the time, with a peripheral card being able to do most anything the CPU could. That's the difference in the Apple that I find most distinctive, because it clearly set the machine apart from the more integrated designs with custom chips.

 

...and the "most anything" was actually expanded in the //e with the auxiliary slot, where some signals and busses not present in the original slots are brought out. Integrated video can be done out of that slot, though one would have to also do the memory expansion there too.

 

Laughing big at the TI picture, though to be fair, one could get the big-ass expansion chassis and consolidate all of that a little, like the CoCo offered with its Multi-Pak expansion. Had one of those for a while. Nice machine actually. Wish I still had it, but that expansion was huge, heavy, etc... The Apple, by comparison, is fairly clean and neat, even if it's got a few wires connected here and there under the hood. Nobody had to know. :)

 

The software-driven disk proved to be useful. Yeah, no DMA, but it was fast. Notably faster than the Atari I had at home at the time. All the bizarre copy-protection methods were intriguing, as were options to expand the disk size and do all sorts of things with the drive. I think it was a smart move at the time.

Edited by potatohead

This always gives me a laugh! Pray tell, how would you run this in a professional workspace, or even at home with kids bouncing off the walls? I don't even know what some of this stuff is...

 

[attached photo: a TI-99/4A with its sidecar expansion modules daisy-chained off the side]

Computer, speech synth, printer, RAM expansion, serial interface, disk interface, disk drive, modem.

:D


A tiny correction, the //e did not really have built-in Double-Hires Graphics. It required the additional 64k-ram/80column card. When you added in the extra memory and a few extra gates, THEN the //e could do DHR modes. See? That's the neat thing, you got extra memory, 80 column capability, and DHR stuff. 3 things in one. Though perhaps not all usable at the same time. There were generally 2 types of AUX-slot cards, the 80-column card, and the combo 64k/80column.

Yeah, it says nothing about the computer itself, but a lot about the suppliers of expansion hardware. ;) (well, nothing about the computer other than that it has the facilities for that expansion ;) -in spite of Apple themselves often intentionally quashing that flexibility . . . but that's where 3rd parties come in :P)

 

That's the point of expansion: no matter how limited a system is, with flexible enough expansion support, you can do a LOT with it . . . given the right product support for those expansion modules. ;) (and how you get that support varies . . . after all, it took the PC almost 7 years to get a sound card of ANY type . . . prior to the release of AdLib in 1987, there was nothing but buying a Tandy with a built-in PSG or a bare 8-bit DAC for the parallel port -actually really odd that the PC didn't get something like the Mockingboard in the early 80s, or even the mid 80s -of course, immediately following AdLib was a wealth of other consumer and professional sound systems -CMS/Game Blaster, Sound Master, MT-32/LAPC-I, IBM Music Feature Card, Sound Blaster, etc, etc)

 

The 6502 RWTS controlled everything on the DISK ][ drive. The currently running program would halt and stop while the 6502 stepped the motor, read the data, and placed it in memory. There was no underpowered (for the time) DMA going on. If you're going to do DMA, make sure your peripherals and system architecture can handle it. In a sense, the 6502 in the II BECAME the DMA controller. I believe the Apple Disk ][ units were faster than the 1541's and 810's. But someone would have to check that for me.

Of course it was many times faster than the 1541, but that's not saying much at all since tape is faster than that. :P

I seem to recall figures showing the Apple II's parallel floppy drive also being somewhat faster than Atari's drives, but it's unclear. (there's also the fact that the default baud rate of the Atari drives is not the technical limit . . . SIO peaks at ~127 kbps, and while the disk drives can't manage that, they can go much faster than the 19.2 kbps default speed -which is already many times faster than the 2.4 kbps of CBM drives- with fastloaders pushing drives close to 70 kbps -which means an updated version of DOS should also have been capable of that; the C64 had fastloaders pushing things into the 20-30 kbps range -though many did less than that- and all of those had to bypass the built-in DOS/ROM loading program -Atari's DOS was loaded from disk, so you'd have open-ended update potential -you just needed the support for that, just like any sort of hardware or software expansion support ;))
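For a feel of what those rates mean in practice, here's a rough load-time sketch using the figures quoted above (assuming a ~90 KB single-density side and ignoring seek/sector overhead, so these are best-case numbers):

```python
# Rough full-side load times at the serial rates quoted above.
DISK_KB = 90  # assumed single-density disk side

rates_kbps = {
    "CBM 1541 stock (~2.4 kbps)":    2.4,
    "Atari SIO default (19.2 kbps)": 19.2,
    "Atari fastloader (~70 kbps)":   70.0,
}

for name, kbps in rates_kbps.items():
    seconds = DISK_KB * 8 / kbps  # KB -> kilobits, divide by line rate
    print(f"{name}: ~{seconds:.0f} s")
# -> ~300 s stock CBM, ~38 s stock Atari, ~10 s fastloaded
```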

 

I remember digitizing photos and videotape on the Apple II, Amiga, and PC. All around the same time frame, more or less.

With the Apple II, I didn't have to worry much about file conversion.

With the Amiga, I had to convert stuff back and forth between applications like paint programs and print programs.

With the IBM PC, most other applications seemed to accept the varied formats right from the get-go, or they came with easy-to-follow directions and converters.

You can't comment on the ST then? (I don't know a huge amount about the ST's software in general, but it at least seemed to borrow a few aspects of PC-compatible formatting from the sheer use of GEM -the floppy data format was directly compatible for one thing, plus the actual hardware formatting of the disks was also the standard 80-track DD formatting of PCs)

 

With the IBM PC format, there were all these EMS and XMS boards (now I use dosbox for old gaming), and all these memory management utilities. Ok. You see, all these confusing standards, BUT, all you had to do was wait a few months after something new came out, and the good stuff floated to the top while the crap stuff tended to wash away. This was still the time when "the guy at the computer store" knew what he was talking about. And you could get solid advice by asking what was compatible and what was working well in the field. I always got cartoony answers when I asked similarly styled questions about the Amiga. Gosh! And that "Intuition/Workbench" operating system, way overcomplex! Things just seemed to "happen" on the IBM and the Apple II series. Demos worked, and things seemed much less conditional.

Again, that seems to be the de-facto pro-GEM/ST argument over the Amiga on the technical end: just simpler and easier to use (less feature rich, but much less complex by that same virtue, and more stable -and giving more concrete error messages when it bombed rather than the confusing guru messages)

 

Even (reasonable) Amiga fans seem to recognize that. (even if THEY prefer the Amiga OS's features, they recognize how simplicity could be preferable to many others)

 

The keyboard, the CPU box, the monitor... This allowed for great ergonomics and more customization. Many of the early 8-bitters had to have their disks outside the CPU box, and the keyboard attached. The other way around -disks in the CPU box and the keyboard as a separate piece- made for just so much better configurations and made the computer infinitely easier to place into many MANY MANY workspaces.

Yes, the Apple II was at a disadvantage there too . . . prior to the IIgs you only had the keyboard/console form factor, no "proper" desktop configuration.

 

That was one of the ST's problems too, though they tried to make it look like that with the dual disk drives with monitor on top configuration. (odd that they didn't have a desktop machine right from the start given that the keyboard interface was obviously designed to be configured for an external unit -it has a dedicated 6301 MCU scanning the keys and joyports and outputting a serial data stream to one of the ACIAs, so that's a fair chunk of added cost compared to an embedded parallel keyboard interface)

 

The Amiga had the opposite problem though. Just the 1000, no low-end console model until 1987 (the same year the ST got its desktop model ;)), so both artificially limited their markets with that. (the lack of a lower-end model really hurt in Europe, but probably was rather significant in the US too . . . hell, one could argue an even lower-end model than the 520 ST would have been merited, at least in the EU market)

 

The tight engineering in the Apple II and the Atari 2600 is what allowed only the best and brightest of programmers/engineers to work with the platform. There was little room for incompetence. And if you slacked, it would show up almost immediately to the point of un-usability, and naturally got itself filtered out. This "way-of-working" remained with the 400/800 and C-64, and the IBM-PC. But, today we have so much computing power, we can hide a 400LB fatman in a size 3 dress! And make it look good. That doomed the Amiga, too much computing capability that remained unfocused and bloated away by overly complex software.

Yes, but forcing programmers to do that is never good. ;) If you can establish strong market share, you'll get programmers to push things either way, but if you take 2 machines with all else equal (raw capabilities, marketing, pricing, expandability, brand recognition, etc), the one that's easier to develop for will get the better support. (either higher quality or quantity, or both -and weaker programmers will end up with better software)

 

Of course, if you have a generally weaker market position, being easy to program for can be a critical saving grace, while difficulty (or simply not complying with "standards") can really kill 3rd party support. (the 7800's architecture was certainly problematic in that sense, Jaguar, Saturn, etc, etc -oversimplification, but certainly one facet of things -plus, it's not just hardware, but API/tool support that can limit things . . . with the 3DO you had the extreme opposite, good high-level tool support but a total inability to optimize at low level -3DO wouldn't allow it, zero low-level programming documentation, and 3DO handled all final approval and encryption -so the only way around that would be to go unlicensed and reverse engineer the hardware, and even then you'd be screwed by incompatibility between certain models ;))

 

A brief touch on expansion: much of the Apple II expansion options, and IBM options, fit into the main case, somehow. If it was outside the case, then that meant it was meant to be played with and handled by a user. If you look at the Amiga, they had processors hanging off the side, hard disks hanging off the side. Modems too, same thing. The IBM and Apple could have that stuff installed *INSIDE*. Sure there were external drives and modems for the II and PC, but you had options! In or out.

You talk like Apple and PC expansion wasn't meant to be handled by the user or that hapless users couldn't rely on technical help for Amiga/ST/etc expansion if needed. ;)

 

Having built-in expansion is nice to have and is more flexible and clutter free (and especially professional looking), and that's exactly why I said it should have been a standard feature for high-end (or mid range) models from the start. The lower-end (console and slim/pizzabox type desktops) models would have basic external expansion slots compatible with the large internal arrays in bigger models and could either be used by one (maybe 2) external expansion boards/carts/modules, or could accept a full external expansion box. (like the 1090XL)

The Laser 128 also did that. (one Apple IIe-compatible slot that could be expanded to a few slots with an external module -a shame the IIc didn't do that; instead it took a proprietary route with new expansion connectors)

 

The Spectrum mainly got expansion carts, as did the C64 (and VIC), not a full array of slots. (RAM, sound expansion, CP/M carts, music sampler/tracker carts, DACs, etc, etc -or on the Spectrum 16/48k, you needed expansion for joysticks and cart slot as well)

 

This always gives me a laugh! Pray tell, how would you run this in a professional workspace, or even at home with kids bouncing off the walls? I don't even know what some of this stuff is...

 

[attached photo: a TI-99/4A with its sidecar expansion modules daisy-chained off the side]

Yes, that's a really shitty example of how to set-up external expansion :lol:

 

Of course, they DID quickly abandon that idea (given that 1 or 2 piggybacked modules were the practical max) and introduced a PC-like, professional-looking big-box expansion chassis . . . unfortunately it was very expensive, and TI made the stupid decision not to expand their range of machines at all. (had they introduced a proper, professional-class machine with that expansion chassis integrated with the motherboard in a desktop form factor, they very well may have built a strong market position for themselves . . . instead they kept pushing in the low-end market with bad marketing techniques that lost to CBM -their idiotic closed software market was a huge part of that too- they had no mid-range console or desktop models, no models with more than 256 bytes of CPU RAM out of the box -plus 16k of DRAM for video- and no simple RAM expansion -that expensive expansion chassis added RAM, but few used it -the emphasis on ROM-based software also hurt things, on top of their closed software market, and their closed peripheral market for that matter)

 

Hell, TI probably should have launched a higher-end desktop format version of the TI99/4 back in '79.

 

Having a closed architecture (hardware and software -including expansion) was crippling to TI, and limiting their range of products so severely hurt all the more -as did limiting software mainly to carts. (if Apple had done that, in spite of the nominal internal expansion, the Apple II probably would have done quite poorly . . . their open expansion architecture allowed 3rd parties to push the machine when Apple was content to let it stagnate :P )

IBM actually DID try to go proprietary after the fact (the PCjr and PS/2 both did that to some extent), but their pre-existing open standard won out over their updates. ;) (though some of the better updated features were adopted and cloned by 3rd parties -Tandy obviously ran with the PCjr and gave that architecture a real market, and the PS/2 obviously spurred support for 3.5" floppies and VGA)

IBM would have been better off trying to compete directly with clone manufacturers rather than trying to block them. ;) (they ended up doing that eventually, but lost tons of ground before that point . . . if they'd played their cards right, perhaps they could have even established OS/2 as the true successor to DOS rather than Windows)

Yeah, there is a lot to be said for "in the case" expansion. I think the mere idea that it is going to happen leads to the kind of planning that allows expansions to become a real part of the machine, and not some external device.

 

The difference is often subtle, like with disk drives. It can be profound too, like the CP/M card, or the many odd video / CPU cards for the Apples.

You can do all those things externally too; it's more about getting support than how the expansion interface is achieved. (though, as you say, internal expansion can boost that too . . . the thing is that external expansion is still pretty damn good, but many machines totally lacked that as well -the ST got expansion support in spite of Atari making it closed-box, but most things had to be hacked -the Atari 8-bits were the same until the PBI, and that came pretty late and then got dropped as well -you had a lot of hacks for expansion, including things like the Mosaic board that clipped onto exposed pins on the bottom of the 400/800's PCB ;))

 

However, I was suggesting that internal expansion slots SHOULD have been a standard feature on the higher-end models, with lower end models offering a cheap integrated expansion slot/port that could be used for some simple plug-in modules as well as a full expansion box adding the array of internal slots of the higher-end models. ;)

 

 

 

No-one did that though, that's the problem . . . well, technically the Laser 128 DID do that (with the real Apple IIe being the "high-end" version), but really, no manufacturer was offering a proper range of low- to high-end machines, let alone with a standardized expansion architecture. (Atari's PC-1 also sort of did that with its single externally mounted ISA slot with expansion module support . . . and the Amstrad PC-20/200 with the top-mounted external ISA slot)

 

 

 

Hell, from Atari Corp's perspective, it probably would have been smart to use the same 62 pin (or 98 pin) connector used for PC ISA expansion. Not necessarily aim at pin compatibility, but just the same connector to make it somewhat easier for 3rd party expansion card manufacturers to produce cards for the machine. (they COULD have made it pin compatible -of course, the cards would still have to comply with the ST's bus rather than PC ISA -unless Atari actually supported the ISA bus architecture via glue logic, but that would be less optimal and less cost effective -plus, an optimized expansion port for the ST might have managed a 16-bit expansion bus within just 62 pins and thus omitted the cost/space needed for the added 36 pins for 16-bit ISA -which is where an incompatible pinout would come in handy)

 


DMA as implemented is very clever, given the limited design. Also appreciate how the video was done in parallel with the CPU. This was done on the CoCo as well, and to me, it's a great feature, though it does complicate things like DMA. On the flip side, it's a serious speed boost, which matters on the Apple, as it's running at 1 MHz.

Yes, the BBC Micro, C64, ST, and Amiga do that as well I believe. Interleaved memory accessing becomes much less attractive in other contexts though . . . even the BBC Micro must have needed pretty fast RAM for the time to pull that off with a 2 MHz 6502 (probably as fast as the RAM in the ST). By comparison, the RAM used in the Atari 8-bit was almost certainly significantly slower (and cheaper) than what was in the beeb (probably similar speed to what's in the Apple II), but it still managed to do significantly better than 1 MHz due to efficient serial bus sharing. (around 1.2 MHz average performance with video enabled -about 1.6 MHz with video DMA disabled iirc -the rest of the hit is from refresh)

 

Actually, that's probably what the Apple II should have switched to for later models. (with an updated video system scanning to a line buffer in hblank, a 2 MHz 6502 could have been used with the same RAM, though somewhat less than 2 MHz nominal performance -still a significant boost with minimal cost increase)

 

Likewise, for the ST or Amiga, going forward with faster CPUs would have been more efficient with the interleaving scheme dropped in favor of serial bus sharing with wait states. (not to mention optimization of the custom chips for fast page mode -of course, with fast RAM in the Amiga, you sidestep that problem altogether with the more expensive option of adding another bus -more expensive compared to optimized serial bus sharing, but still more cost effective than some other options like using fast/expensive DRAM/SRAM/VRAM and/or wider buses -though a 32-bit shared bus could also be a fairly efficient option, especially if you were going to a 32-bit wide CPU bus -then you'd need 32-bit wide fast RAM anyway, and that would definitely be more expensive than a shared 32-bit bus, but dual 16-bit buses could make sense too, with various R&D investment trade-offs on top of that)

 

Or the CoCo for that matter . . . a 1.79 MHz 6809 with Atari-style wait states (effective 1.2 MHz) would have been much nicer than the interleaved video DMA, albeit it would have needed a faster-rated CPU (except the CoCo already supported the address-dependent mode to allow 1.79 MHz when working in ROM). For that matter, the Apple II, CoCo, or any other system using interleaved DMA primarily with video could have disabled interleaving in vblank and doubled the CPU speed. (ST and Amiga had primary DMA interleaved as well -floppy disk, sound, blitter, hard disk, etc, so a bit of a different case)

In that respect, the Apple II and CoCo both could have managed a roughly 26% performance boost (roughly 38% in 50 Hz PAL video), assuming the CPUs could run at 2x speed in RAM without wait states. (the A8's DRAM refresh overhead cut performance by about 12%)
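The arithmetic behind those percentages, as a minimal sketch (assuming 192 active lines, the CPU at 1x during the active display and 2x during blanking, and ignoring hblank):

```python
# Average speed-up from doubling the CPU clock during vertical blanking.
def vblank_boost(total_lines, active_lines=192):
    blank = total_lines - active_lines
    avg = (active_lines * 1 + blank * 2) / total_lines  # vs. 1x all frame
    return (avg - 1) * 100

print(f"NTSC, 262 lines: ~{vblank_boost(262):.1f}% boost")  # ~26.7%
print(f"PAL,  312 lines: ~{vblank_boost(312):.1f}% boost")  # ~38.5%
```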

 

The software-driven disk proved to be useful. Yeah, no DMA, but it was fast. Notably faster than the Atari I had at home at the time. All the bizarre copy-protection methods were intriguing, as were options to expand the disk size and do all sorts of things with the drive. I think it was a smart move at the time.

No DMA on the A8 or CBM floppies either . . . a separate serial interface with the CPU pulling data off the serial bus. (unless I'm mistaken, the CIA/VIA in the VIC/C64 or POKEY require CPU assistance to move data into memory)

 

Also, the speed of the Apple's interface would be largely due to the software end (ie the speed of the loading program in Apple DOS). As mentioned above, the Atari floppy drives could be driven considerably faster (3-4x faster) without any hardware modification (ie it could have been done in later Atari DOS updates), unlike the C64, which had the DOS embedded in ROM onboard the 1541. (though custom loaders worked around that to get 5-10 fold increases in speed -albeit 10-fold was still only slightly faster than the normal 19.2 kbps of the Atari drive, and well short of the ~60-70 kbps)

Edited by kool kitty89

I don't understand some of your comments.

 

The C64 loses CPU cycles due to video DMA, as does the Atari. Both of those machines would have operated considerably faster, had the video system been done Apple / CoCo style.

 

The Apple II does not, as its video DMA and refresh are combined and, on the early clock phase, invisible to the 6502, which runs at a full clip but for a stretched cycle every 65th cycle. (done to get the video in spec, I think) The same is true of the CoCo, using a two-phase clock to keep from interrupting the 6809.

 

There would be no speed improvement from being able to turn video off, as it's invisible to the CPU. I always found the Apple and the CoCo notable for doing the video system that way. Interleaved DMA is quite costly! Woz took advantage of the narrow timing window where the 6502 actually latches its data, as Tandy did with the 6809. I don't know how the CoCo was clocked, particularly the III, but the Apple runs on a 14 MHz clock that gets divided down to produce the timings needed for the video to operate alongside the CPU, not stealing cycles.
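The divided-down arithmetic, for the record (a sketch of the commonly documented timing: 65 CPU cycles per scanline, 64 of them 14 master-clock ticks long and the 65th stretched to 16):

```python
# Apple II clock arithmetic: 14.31818 MHz master clock, 65 CPU cycles
# per scanline, with the 65th cycle stretched from 14 to 16 ticks.
MASTER_MHZ = 14.31818

ticks_per_line = 64 * 14 + 16          # 912 master ticks per scanline
avg_ticks_per_cycle = ticks_per_line / 65
print(f"CPU clock ~{MASTER_MHZ / avg_ticks_per_cycle:.4f} MHz")
# -> ~1.0205 MHz average (1.0227 MHz between stretched cycles)
```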

 

A CoCo has the 2X CPU clock, with video still not interrupting the CPU, built in, just not turned on by default. Never did understand that decision fully. I also am not sure about CoCo refresh. That might still be in play, though the video isn't. CPU speed does not change with character modes vs. higher-resolution bitmaps, for example. At the 2X clock, it's 1.79 MHz --and pretty fast at RAM access, with ROM clocked down on those access cycles.

Edited by potatohead

The Apple II still displays video when the Disk ][ is accessed. The processor & user program/game/app just does not update anything. The 'bitmap' remains static in RAM. To the end user it would look like the machine has frozen. But come hell or high water, the video timing is fixed in hardware and nothing interrupts it.

 

Is that what we're discussing? Sorry, I gotta get outta here right now.


Yeah, come back later :)

 

Even broken apples will display something, much of the time. It's random junk, but it is displayed anyway.

 

A thought just occurred to me: Are there books for the CoCo / Atari / C64 written to the level of detail that were written for the Apple, with regard to timings, and precise system operation?

Edited by potatohead

Or the CoCo for that matter . . . a 1.79 MHz 6809 with Atari-style wait states (effective 1.2 MHz) would have been much nicer than the interleaved video DMA, albeit it would have needed a faster-rated CPU (except the CoCo already supported the address-dependent mode to allow 1.79 MHz when working in ROM). For that matter, the Apple II, CoCo, or any other system using interleaved DMA primarily with video could have disabled interleaving in vblank and doubled the CPU speed. (ST and Amiga had primary DMA interleaved as well -floppy disk, sound, blitter, hard disk, etc, so a bit of a different case)

In that respect, the Apple II and CoCo both could have managed a roughly 26% performance boost (roughly 38% in 50 Hz PAL video), assuming the CPUs could run at 2x speed in RAM without wait states. (the A8's DRAM refresh overhead cut performance by about 12%)

The GIME from the CoCo III lets the CPU run at 1.79 MHz for all memory addresses.

Motorola supposedly developed a new SAM that did the same for older CoCos, but they never released it.

The existing SAM could do it for all addresses, but it disabled RAM and video refresh, where the GIME does not.

The circuit diagram of the SAM doesn't look too complex, so a drop-in replacement should be doable in a small PLD.


Up until the mid-80s Apple enjoyed far better financial returns and stability than anyone battling in the 8-bit consumer market. It didn't matter that in many ways their technology was severely dated. The Apple II line didn't even have upper- and lower-case letters until the IIe. What really mattered was that they were taken seriously by people who had serious money to spend, and Jobs has always been happier catering to the affluent. It didn't hurt that they managed to work scams in several states, including CA under Jerry Brown (I can't believe this guy is governor again), to give computers to schools in exchange for tax credits of value far greater than the cost of the donation to Apple, as the valuation was based on the retail price of the donation. Apple would then pretend to be discounting that while still being well ahead on the actual cost.

 

By the mid-80s, Jobs actively wanted the Apple II to go away. He actively sought to kill the GS before launch and managed to force crippling engineering choices. The worst was the fixed 16KB video RAM. This made it painful to do the simple page flipping that was the staple of animation and scrolling on the Apple II. The audio processor was very advanced for the era, but implemented on the board in such a way as to be cripplingly constrained. This was fixed somewhat in a later revision that gave it a good-sized chunk of its own RAM, relative to the era. Everybody working with the prototype Cortland boards knew this was a serious problem.

 

On the software side, it isn't clear what, if anything, was done to sabotage that but by the time there was a decent OS for the IIgs it was hardly worth bothering anymore.

 

In a saner world, the IIgs would never have been created. It made far more sense to give Apple II owners an upgrade path to the Macintosh that would preserve their existing software investment and open them up to what the Mac had to offer, rather than creating an almost entirely new platform that Apple didn't really want to succeed. In fact, Apple did do this with an optional board for the Mac LC (IIRC), but only after much, much suffering by the Apple II faithful.

 

Imagine the schism if Atari and Commodore had been flogging 65816 extensions of their 8-bit systems at the same time they were trying to sell the ST and Amiga. I once asked Leonard Tramiel about it and he said it was strongly considered, but ultimately there were too many hassles in trying to move forward while staying backward compatible. Better to cut the cord and move on. A big issue for Atari at the time was the need to create a new but compatible chipset. This would have taken far too long and required far more resources than they could muster. (Consider how long the Amiga chipset was in development before the first model became a shipping product.)

 

If they had enough cash, they might have made a bid to pick up the Mindset and made it more of a consumer-oriented system. The big bonus here, in addition to having an Amiga-like chipset, was using an x86 CPU. They could have been offering hardware acceleration for Windows before ATI. But that is another reality branch point we'll never know.


The GIME from the CoCo III lets the CPU run at 1.79 MHz for all memory addresses.

Motorola supposedly developed a new SAM that did the same for older CoCos, but they never released it.

The existing SAM could do it for all addresses, but it disabled RAM and video refresh, where the GIME does not.

The circuit diagram of the SAM doesn't look too complex, so a drop-in replacement should be doable in a small PLD.

 

The real shame is that Motorola had a high-powered AV chipset for use with the 6809, intended for arcade machines and home computers. Nobody ever used it in the consumer market. It had some very impressive features for that era. It was likely a cost issue, as I never saw any quotes on pricing for mass quantities. Only a dev board that was about a thousand bucks, mainly sent around to companies doing arcade games.


The real shame is that Motorola had a high-powered AV chipset for use with the 6809, intended for arcade machines and home computers. Nobody ever used it in the consumer market. It had some very impressive features for that era. It was likely a cost issue, as I never saw any quotes on pricing for mass quantities. Only a dev board that was about a thousand bucks, mainly sent around to companies doing arcade games.

I didn't think that chipset was ever released.

The chipset was supposed to support the 68000 as well and should have been released years before the CoCo III.

It would have made the CoCo a tough competitor and would have also left an opening for a 68000 based machine from Tandy.

 

Hmmm... imagine a machine with a 6809 and 68000 as a transition machine.

It could have been interesting. But what would Tandy have done for an OS on a 68000?


I think a lot of it had to do with Jobs. He is a control freak; that's why the Mac had a closed architecture. The Mac was his baby, much the same way the NeXT was his too. He had little to do with the Apple II. That was all Woz's baby. Woz is an amazing engineer and Jobs is an amazing PR/ideas guy. They purposely crippled the IIgs. Why even design the Mac? You have a backward-compatible machine with excellent sound/video capabilities and a color GUI. It's 16-bit just like the Mac. With open architecture. How could they not have won with that machine? By limiting its processor speed to (forgive me if I'm wrong) 2.8 MHz? That's how. Not to mention all the crooked deals Jobs was giving the university circuit to purchase Macs/requiring students to purchase Macs.


For those of you that don't know about this site -- http://www.folklore.org -- it's got some good little short stories about the early years. Focusing on the Mac and II series! I always find it amusing to know that Steve J. insisted the Mac's motherboard go through some revisions because it wasn't pretty enough.

Edited by Keatah

The GIME from the CoCo III lets the CPU run at 1.79 MHz for all memory addresses.

Motorola supposedly developed a new SAM that did the same for older CoCos, but they never released it.

The existing SAM could do it for all addresses, but it disabled RAM and video refresh, where the GIME does not.

The circuit diagram of the SAM doesn't look too complex, so a drop-in replacement should be doable in a small PLD.

Hmm, interesting. I wonder how fast the DRAM needed to be to facilitate 1.79 MHz CPU operation without wait-state facilities for video/refresh. (and when that got cheap enough to be acceptable . . . probably a good bit earlier than 1986, when the CoCo III was released ;))

Same for the Apple II for that matter. (ie, without added wait state facilities, when was the earliest it could have used 2.04 MHz, or 1.79 MHz for that matter -plus the simple vblank speed boost could be possible as long as DRAM refresh was fast enough to not need wait states)
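To put rough numbers on that question (cycle-time arithmetic only -real DRAM timing involves RAS/CAS details beyond a single access window): transparent interleaving has to finish a memory access in half a CPU cycle, since video owns the other half, while Atari-style serial sharing gets the whole cycle:

```python
# Access window available to RAM under each bus-sharing scheme.
for cpu_mhz in (1.02, 1.79, 2.04):
    cycle_ns = 1000 / cpu_mhz
    print(f"{cpu_mhz} MHz CPU: interleaved ~{cycle_ns/2:.0f} ns, "
          f"serial-shared ~{cycle_ns:.0f} ns")
# -> at 2.04 MHz, interleaving wants ~245 ns accesses; serial sharing
#    tolerates ~490 ns parts, i.e. the same RAM the 1 MHz machines used.
```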

 

 

 

 

...and the 140x192 graphics were pseudo 6 colors, in all machines but the early rev 0 boards, which did not include the phase shift capability. Apples had the first "color cell" kind of feature, where clashing occurs on byte boundaries, due to color definitions not being pixel addressable.

I thought it was 7-bit boundaries, not 8-bit. (so rather a pain to manage with software rendering)

 

With 4 colors, I was referring to the common way most games used them (outside of cutscenes or title screens): you could use one of two 4-color palettes for a full bitmapped display (any pixel on the 140x192 grid could be any of the four colors). You either got black+blue(cyan)+orange+white or black+green+magenta+white.

As it happens, one of the CoCo's 4-color palettes is white+black+cyan+orange (though there's no good substitute for the other Apple palette), so you'd have 128x192 with much the same colors to use. (though without artifact colors or the odd screen addressing, plus a more powerful CPU and a DAC rather than a beeper)
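
For anyone unfamiliar with the scheme, here's a heavily simplified C sketch of how those two Apple hi-res palettes fall out of the bitstream (it ignores the half-pixel shift and the column-parity detail that decides which of the two colors a lone bit gives, so it's an illustration of the pairing idea, not a real renderer):

#include <stdio.h>

/* Each hi-res byte holds 7 pixel bits plus a high "palette" bit that picks
   the color pair for the whole byte -- hence the clash on 7-bit boundaries
   mentioned above.  Viewed as 140-wide color pixels, each bit pair gives
   black (00), white (11), or one of the two colors in the selected pair. */
static const char *pairs[2][4] = {
    { "black", "green", "magenta", "white" },   /* palette bit = 0 */
    { "black", "blue",  "orange",  "white" },   /* palette bit = 1 */
};

int main(void)
{
    unsigned char b = 0xAA;           /* palette bit set, pixel bits 0101010 */
    int pal = (b >> 7) & 1;
    for (int p = 0; p < 3; p++) {     /* first three bit pairs of the byte */
        int two = (b >> (2 * p)) & 3; /* low-order bits reach the screen first */
        printf("pair %d: %s\n", p, pairs[pal][two]);
    }
    return 0;
}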

 

That's also not counting the double highres 16 color artifacts.

 

 

 

 

 

I don't understand some of your comments.

 

The C64 loses CPU cycles due to video DMA, as does the Atari. Both of those machines would have operated considerably faster, had the video system been done Apple / CoCo style.

OK, my mistake on the C64, but the rest still applies. Interleaving a la Apple/CoCo can be inefficient, and the Atari is a great counter-example . . . what would you rather have, a 1.79 MHz CPU with wait states making it ~1.2 MHz, or a 1 MHz CPU without contention?

 

If you want a 1.79 (or 2) MHz CPU without contention, you'll need faster (expensive) RAM, if it's even available at all.

A 2 MHz 6502 in the Apple II with serial bus sharing for video (with wait states) would have meant using the same RAM speed at generally the same cost (some added complexity for efficient wait states, though simpler if you restricted video fetches to hblank loading into a line buffer like Atari did), and would have been rather nice. (just like if you wanted a faster 68k in the ST or Amiga, disabling interleaving in favor of wait states would be most efficient)

 

Though, again, even using 2 MHz (or 1.79) in vblank with 1 MHz interleaved in active display would have been a nice boost over the Apple II. (that's assuming you didn't need added waits for refresh or such)
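
That tradeoff is easy to model crudely in C (the steal rate and line counts are assumptions picked to land near the ~1.2 MHz figure above, not measurements):

#include <stdio.h>

int main(void)
{
    double active = 192.0 / 262.0; /* assumed fraction of NTSC frame in active display */
    double stolen = 0.45;          /* assumed DMA steal rate during active display */

    double waits  = 1.79 * (active * (1.0 - stolen) + (1.0 - active));
    double hybrid = 1.02 * active + 1.79 * (1.0 - active);

    printf("1.79 MHz + wait states     : ~%.2f effective MHz\n", waits);
    printf("1.02 MHz fully interleaved : ~1.02 effective MHz\n");
    printf("interleave + 1.79 in vblank: ~%.2f effective MHz\n", hybrid);
    return 0;
}

Interestingly, under those assumptions the vblank-boost hybrid lands in about the same ~1.2 MHz ballpark as the wait-state approach.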

 

The Apple II does not, as its video DMA and refresh are combined, on the early clock phase, invisible to the 6502, which runs at a full clip but for a stretched cycle every 65th cycle. (done to get the video in spec, I think) The same is true of the CoCo, using a two-phase clock to keep from interrupting the 6809.

I understand how that's done, but it would have been a detriment to the Atari if they'd done that.

 

The Atari's bus sharing scheme is already more efficient and advanced than what the Apple does . . . the 7800 took that further with double buffered scanlines, allowing video DMA to extend beyond hblank (for much more complex display lists).

Interleaved/hidden bus sharing requires a slow CPU relative to memory speeds, so it would have been much more expensive (or a much later release) to have Apple II type bus sharing at 2 MHz. (the BBC Micro was almost certainly expensive in part because of its 2 MHz interleaving)

 

So, with the same memory as in the Atari machines, you'd need to cut the CPU clock speed considerably to allow interleaving.

 

Another nice advantage of serial bus sharing is more direct upgrades to faster CPUs. Once faster RAM did get cheaper, the Atari could have bumped to a 3-4 MHz 6502 with wait states for video/refresh. (depending on the exact clock speed used, it could have been a gradual upgrade too . . . looking at NTSC compatible clock divisions/multiplications, it might have been 2.04 MHz, 2.38 MHz, 2.86 MHz, or 3.07 MHz with 2 or 3 MHz 6502s, or 3.58 MHz or beyond with 4+ MHz 6502s or C02s)
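
All of those NTSC-friendly clocks are just integer divisions of the same 14.31818 MHz master clock (4x the 3.579545 MHz colorburst), which a few lines of C can enumerate:

#include <stdio.h>

int main(void)
{
    const double master = 14.31818;                 /* MHz, 4 x colorburst */
    for (int div = 4; div <= 14; div++)
        printf("14.31818 / %2d = %6.3f MHz\n", div, master / div);
    return 0;
}

That prints the familiar 3.58, 2.86, 2.39, 2.05, 1.79, 1.43, 1.19, and 1.02 MHz values, among others.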

 

 

There would be no speed improvement from being able to turn video off, as it's invisible to the CPU. I always found the Apple and the CoCo notable for doing the video system that way. Interleaved DMA is quite costly! Woz took advantage of the narrow timing window where the 6502 actually latches its data, as Tandy did with the 6809. I don't know how the CoCo was clocked, particularly the III, but the Apple runs on a 14 MHz clock that gets divided down to produce the timings needed for the video to operate alongside the CPU, not stealing cycles.

Actually, it's not so costly; it's generally cheaper if the CPU is that slow anyway. ;) With a faster CPU and bus contention (refresh, DMA, etc.), you need a good (fairly complex) wait state mechanism to optimize CPU work time. (a really weak/basic set-up might limit CPU access to vblank, or not allow video and CPU to be used together at all -like the ZX80 or 81- though in those cases you at least need to make sure refresh is hidden from CPU accesses)
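
As a toy illustration of such a wait-state arbiter, here's a C loop where video DMA wins any cycle it wants during active display and the CPU takes whatever is left (the slot pattern and cycle counts are invented for illustration only):

#include <stdio.h>

int main(void)
{
    const int line_cycles = 114;     /* assumed bus cycles per scanline */
    const int hblank_from = 91;      /* assumed start of hblank */
    int cpu = 0;

    for (int c = 0; c < line_cycles; c++) {
        int video_wants = (c < hblank_from) && (c % 2 == 0); /* made-up pattern */
        if (!video_wants)
            cpu++;                   /* CPU granted the bus; otherwise it waits */
    }
    printf("CPU ran %d of %d cycles (%.0f%%)\n",
           cpu, line_cycles, 100.0 * cpu / line_cycles);
    return 0;
}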

Edited by kool kitty89

The real shame is that Motorola had a high powered AV chip set for use with the 6809 and intended for arcade machines and home computers. Nobody ever used it in the consumer market. It had some very impressive features for that era. It was likely a cost issue, as I never saw any quotes on pricing for mass quatities. Only a dev board that was about a thousand bucks and mainly sent around to companies doing arcade games.

I didn't think that chipset was ever released.

The chipset was supposed to support the 68000 as well and should have been released years before the CoCo III.

It would have made the CoCo a tough competitor and would have also left an opening for a 68000 based machine from Tandy.

 

Hmmm... imagine a machine with a 6809 and 68000 as a transition machine.

It could have been interesting. But what would Tandy have done for an OS on a 68000?

Tandy did have a 68000 based machine with the TRS-80 Model 16 . . . sort of. ;) (my dad upgraded his Model II with the 68000 board and a hard drive in the mid/late 80s)

 

As for the OS, for their 68k stuff, they used Xenix and the rudimentary TRSDOS-16. (the latter mainly using the 68000 as an accelerator/coprocessor with the Z80 handling most of the OS work)

So Xenix looks like a safe bet for any CoCo derivative. (a 6809/68k hybrid would have been easier to interface too, but it seems that wasn't too much trouble to manage with the Z80 of the Model II -probably with some glue logic)

 

That 68k expansion really extended the life of the Model II as a good business machine; too bad it didn't get pushed more. (let alone some evolutionary extension that actually merged the Model I/II standards in the early 80s rather than further diverging and adding the CoCo on top of that -the 6809 was neat, of course, but there's a lot to be said for the compatibility and expandability of an established architecture . . . plus the Z80 isn't too bad itself, especially if bumped in speed -albeit the Model II already had a full 4 MHz Z80)

 

Actually, the Model II was probably the best serious science/business machine on the market until the PC came out. (maybe some of the better CP/M machines were better values, but the Model II was pretty nice for the time -totally devoid of home/casual features, of course, like the PC)

 

 

 

Edit:

Hmm, I wonder why MS/IBM didn't use Xenix instead of DOS for the PC. (they'd released it a year earlier and ported it to x86 too)

Edited by kool kitty89

Tandy did have a 68000 based machine with the TRS-80 Model 16 . . . sort of. ;) (my dad upgraded his Model II with the 68000 board and a hard drive in the mid/late 80s)

 

As for the OS, for their 68k stuff, they used Xenix and the rudimentary TRSDOS-16. (the latter mainly using the 68000 as an accelerator/coprocessor with the Z80 handling most of the OS work)

So Xenix looks like a safe bet for any CoCo derivative. (a 6809/68k hybrid would have been easier to interface too, but it seems that wasn't too much trouble to manage with the Z80 of the Model II -probably with some glue logic)

Xenix is a Unix type OS. Not exactly good for consumers.

 

That 68k expansion really extended the life of the Model II as a good business machine; too bad it didn't get pushed more. (let alone some evolutionary extension that actually merged the Model I/II standards in the early 80s rather than further diverging and adding the CoCo on top of that -the 6809 was neat, of course, but there's a lot to be said for the compatibility and expandability of an established architecture . . . plus the Z80 isn't too bad itself, especially if bumped in speed -albeit the Model II already had a full 4 MHz Z80)

4MHz on a Z80 is nothing like 4MHz on a 6809. The Z80 requires a lot more clock cycles per instruction, and in my own code takes around 50% more instructions than the 6809. There were also operating systems like FLEX and OS-9 for the 6809 and they had a lot of software. Since OS-9 was popular on the CoCo I would guess that they would use the 68000 version for any new machine. But then it was a Unix like OS as well.

 

Actually, the Model II was probably the best serious science/business machine on the market until the PC came out. (maybe some of the better CP/M machines were better values, but the Model II was pretty nice for the time -totally devoid of home/casual features, of course, like the PC)

The Model II was a pretty closed system, it was HUGE, and expensive.

 

 

Edit:

Hmm, I wonder why MS/IBM didn't use Xenix instead of DOS for the PC. (they'd released it a year earlier and ported it to x86 too)

But I think Xenix was licensed and MS bought CP/M-86 outright.


4MHz on a Z80 is nothing like 4MHz on a 6809. The Z80 requires a lot more clock cycles per instruction, and in my own code takes around 50% more instructions than the 6809. There were also operating systems like FLEX and OS-9 for the 6809 and they had a lot of software. Since OS-9 was popular on the CoCo I would guess that they would use the 68000 version for any new machine. But then it was a Unix like OS as well.

I was comparing a .89 (or 1) MHz 6809 to a 4 (or 3.58) MHz Z80. ;) I know the '09 is a lot faster per clock (like the 6502 and 6800, but better ;)), but I wasn't thinking of comparable clock speeds, rather of comparable processor speeds available at the time. (the Z80 also doesn't need memory as fast, though I think it needs memory able to respond within 2 clock cycles to avoid wait states -though there are a lot of cases where it would be 4 clock cycles . . . the Z80 derivative in the GB/GBC was hard-coded for 4-cycle access -slowing some things down, but allowing RAM at 1/4 the CPU speed -for a regular Z80, you'd need to configure that externally)
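
A crude way to frame that comparison in C -- every factor here is an assumption (rough average cycles per instruction for each chip, plus the ~50% instruction-count penalty quoted above), so treat it as framing rather than a benchmark:

#include <stdio.h>

int main(void)
{
    double z80 = (4.00 / 8.0) / 1.5; /* 4 MHz, ~8 clks/instr, ~1.5x instr count */
    double m09 =  0.89 / 4.0;        /* .89 MHz, ~4 clks/instr */

    printf("Z80  @ 4 MHz   : ~%.2f M effective instructions/s\n", z80);
    printf("6809 @ .89 MHz : ~%.2f M effective instructions/s\n", m09);
    return 0;
}

Under those assumptions the 4 MHz Z80 still comes out roughly 50% ahead of a .89 MHz 6809, which is the point about comparing parts of the same era rather than the same clock.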

 

By the time the CoCo 3 was released, they should have been able to do much better than 3.58/4 MHz. (7.16 MHz or 8 MHz . . . or 5.37/6 MHz at the very least)

 

Actually, the Model II was probably the best serious science/business machine on the market until the PC came out. (maybe some of the better CP/M machines were better values, but the Model II was pretty nice for the time -totally devoid of home/casual features, of course, like the PC)

The Model II was a pretty closed system, it was HUGE, and expensive.

Weren't the internal expansion slots pretty flexible?

 

And it was expensive in 1981 compared to the IBM PC? (with similar configuration)

 

The 8" floppy became obsolete pretty quickly though. (interesting to note that the business standard PC8801 and PC9801 in Japan used 8" rather significantly in the early 80s -but 5.25" became pretty standardized)

 

Edit:

Hmm, I wonder why MS/IBM didn't use Xenix instead of DOS for the PC. (they'd released it a year earlier and ported it to x86 too)

But I think Xenix was licensed and MS bought CP/M-86 outright.

I think your earlier mention of Xenix not being very consumer attractive compared to DOS is the better argument.

 

Also, MS never licensed or bought CP/M (IBM tried to, but that's another story). MS licensed QDOS (from Seattle Computer Products) and then later bought the rights entirely and licensed it to IBM. (QDOS itself being sort of a CP/M clone)

 

Heh, too bad IBM hadn't done their research better and cut out the middleman and bought QDOS themselves. :P (hmm, IBM having full control/exclusivity for the PC's OS would have been interesting -might have limited clone competition)

 

Edit: nevermind, assuming this is accurate:

http://en.wikipedia.org/wiki/86-DOS

In later discussions between IBM and Bill Gates, Gates mentioned the existence of 86-DOS and IBM representative Jack Sams told him to get a license for it.

 

So IBM did know, but opted to use MS as a middleman for whatever reason. (had MS license and adapt it to the PC rather than IBM buying it themselves and doing any necessary reprogramming work in-house)

Edited by kool kitty89

@kool_kitty: The CoCo ran interleaved at somewhere around 1.7 MHz with a 6809.

 

So here's where I struggle with that decision. I agree with you about cranking up CPU clocks, and the impact of that on RAM. But, cranking up the clocks never actually happened!

 

The CoCo 3 can do 160 bytes or more of DMA for video every scan line, and it can do so while the '09 is running at a clock on par with the Atari machine. Ataris fetch far fewer bytes per line, significantly limiting what the overall graphics system could do.

 

So, can't we flip this equation?

 

Seems to me, graphics systems could be upgraded for more DMA, and/or extended functions, colors, etc... without actually impacting anything that matters otherwise, when things are done the Apple / CoCo way.

 

Looking at what was done with the CoCo 3 is intriguing, because it fundamentally can do what the older computer did, while adding very significant new graphics capability. The CoCo 3 was a nice jump up. So was the //e, with double high-res, basically doubling the video path, without impacting anything.

 

Often, I wonder about the 8 bitters being static, and I think that's a big part of why.

 

Finally, what was the RAM speed used in the Atari machines? The Apple RAM was slow actually. 450ns, or something like that.

 

Surely 200ns RAM would have made things work at 2 MHz, or 1.79?

 

Or, more interestingly, let's say the Atari had to run at the VCS clock for it to work, but ANTIC + GTIA could fetch 80 or 120 bytes per scan line??

 

We could have a 16 color machine, out of 256, sprites and such, and lots of CPU, running very consistently for tricks!

 

Peripherals would have been simpler, and the original chipset could have been extended to bigger color sets, sprites, etc... without anywhere near the impact it would have had otherwise.

 

So, if you look at raw throughput, yeah, the greater efficiency is there. But taking a holistic view of what the machine could have been, I think it's inferior overall.

Edited by potatohead

OK, my mistake on the C64, but the rest still applies. Interleaving a la Apple/CoCo can be inefficient, and the Atari is a great counter-example . . . what would you rather have, a 1.79 MHz CPU with wait states making it ~1.2 MHz, or a 1 MHz CPU without contention?

 

I think I would take the 1.2, with the video system operating transparent to the CPU, able to fetch a lot of data per scan (more than the current arrangement), no timing issues, VCS style :)


@kool_kitty: The CoCo ran interleaved at somewhere around 1.7 MHz with a 6809.

CoCo I and II were fixed at .89 MHz in DRAM due to the limits of DRAM and (later) the SAM's capabilities. The CoCo III ran at 1.79 MHz interleaved, but that was in 1986. (probably using similar speed DRAM as the Amiga, which ran interleaved at a similar rate -the 68k just takes 4x as many clock cycles to complete an access as a 6502/6800/6809, and the Amiga's DMA set-up basically split a 3.58 MHz bus into two 1.79 MHz buses for the CPU and chips to use, with the additional ability for the chips to steal 100% of the bus for burst DMA at full bandwidth)

 

 

So here's where I struggle with that decision. I agree with you about cranking up CPU clocks, and the impact of that on RAM. But, cranking up the clocks never actually happened!

The question is whether cranking up speeds would have been possible in the first place using the same memory . . . or, rather, how fast interleaved accesses could be done. (if the Atari would have had to be cut to 1 MHz, that's not a good alternative . . . the C64 was a bit of a waste though, only 1 MHz and in 1982 -it should have either been interleaved or 2 MHz with wait states)

 

Or, you could have interleaving AND wait states for faster CPU time in vblank as well as consistent CPU and DMA time in active display and hblank. (vs the alternative wait states for burst DMA like the 7800 used -that sort of mechanism would become much more important with FPM DRAM)

 

There's also the issue of how fast the video/DMA accesses can be made. (if you speed up the CPU and RAM later on, but don't improve DMA/video, you'll waste more bandwidth in either case -burst or interleaved DMA- though in the interleaved case, you might be able to rig it so that 2 CPU accesses can be interleaved in the same slot that 1 had before -not speeding up the video hardware would be more an issue with losing the original engineers and/or lacking new engineers capable of working with the existing hardware -a clock doubled A8 would have been pretty nice, even without added features, just doubled dot clock versions of existing modes)

 

The CoCo 3 can do 160 bytes or more of DMA for video every scan line, and it can do so while the '09 is running at a clock on par with the Atari machine. Ataris fetch far fewer bytes per line, significantly limiting what the overall graphics system could do.

GTIA+ANTIC have to load a scanline in hblank, right? I assume that means it's a single buffered scanline in on-chip line RAM, so interleaved DMA wouldn't boost video at all.

If you DID have double buffering, you could use burst DMA at any time and potentially saturate the bus to the extent that the CPU could nearly only work in vblank (which is exactly what the 7800 does). Now, one obvious problem with that is screwed-up timing for CPU tasks needing consistent activity through an entire frame (like certain software-driven audio processes, be it interrupt-driven or cycle-timed code). Albeit, you could artificially restrict the video bandwidth used if you knew such CPU processes would be necessary. (or you could have both interleaved and burst DMA modes switchable -in the 7800's case, that should have been especially realistic at pretty high speeds given the use of SRAM, though ROM would have been slower)
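
Taking the single-buffer reading at face value, the per-line budget difference is easy to show in C (the bus rate and hblank width are assumptions for illustration):

#include <stdio.h>

int main(void)
{
    double bytes_per_us = 1.79;   /* assume one byte per 1.79 MHz bus cycle */
    double line_us      = 63.5;   /* NTSC scanline */
    double hblank_us    = 11.0;   /* rough hblank window */

    printf("hblank-only fetch           : ~%3.0f bytes/line\n",
           bytes_per_us * hblank_us);
    printf("whole-line MARIA-style burst: ~%3.0f bytes/line\n",
           bytes_per_us * line_us);
    return 0;
}

Roughly 20 bytes per line versus 114 -- which is why double-buffered line RAM changes what the graphics system can do.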

 

 

Seems to me, graphics systems could be upgraded for more DMA, and/or extended functions, colors, etc... without actually impacting anything that matters otherwise, when things are done the Apple / CoCo way.

Yes, but that's not an issue related to interleaved DMA, it's an engineering issue in general. (be it boosting the clock speed of the video chips or adding double buffered scanline support, interleaved and burst DMA retain similar trade-offs . . . and similarly, as faster RAM got cheap, it would also be an issue of engineering/R&D resource to update CPU/graphics to take advantage of the increased speed -the Apple II should have been able to bump to 2 MHz by the early 80s, or earlier than that if wait states for non-interleaved accesses had been implemented -but with less than 2 MHz performance)

 

Looking at what was done with the CoCo 3 is intriguing, because it fundamentally can do what the older computer did, while adding very significant new graphics capability. The CoCo 3 was a nice jump up. So was the //e, with double high-res, basically doubling the video path, without impacting anything.

The CoCo3 was a massive jump: double CPU speed full time, added timer facilities (hblank counter), hardware V/H scroll registers (take that, ST and Apple IIGS :P ), a 6-bit RGB palette with up to 16 indexed colors (4-bit packed pixels, take that again, ST :P) with numerous added resolutions, etc. (just about the only thing lacking was the sound hardware . . . if they'd just added a bog standard SN -or preferably AY/YM- PSG, that would have opened sound up nicely -with the DAC used occasionally for drums/SFX/etc -actually the YM2413 might have been a good option if it was ready in 1986)

 

The CoCo 3 got the sort of upgrade the ZX Spectrum could have used to become a truly persistent/evolutionary standard. ;) (had the 128k added that sort of video and CPU upgrade on top of the AY chip, it might have made for real competition against the 16-bit computers ;) -the later SAM Coupe obviously came too late and was still too weak with lack of hardware scrolling)

 

 

Often, I wonder about the 8 bitters being static, and I think that's a big part of why.

Yes, aside from the Apple II, none really had good expandability (the Atari would have if the engineers had had their way, and almost did again with the 1090XL . . . ), and most lacked any sort of general evolutionary development. The ST and Amiga seriously suffered from both of those problems as well: the Amiga got some boosts in expandability and both got a wider range of form factors, but for most of their lives it was mostly just an increase in RAM and minor tweaks to the graphics end. The ST Blitter was rather significant but limited to the high end, and Amiga fastRAM was notable, but both lacked faster CPU models for a long time, and only the ST ever got a faster 68000 version with the Mega STE -and neither had a faster low-end model, nor fully addressed the expandability problems. That was more so for the ST, since the low-end models continued to lack any expansion bus interface and didn't even have easy exploits for hacks like the Amiga's socketed CPU -you had to desolder the CPU in the ST . . . though you'd think custom clip-on 64-pin DIP sockets could have been made; after all, there were those clip-on 100/132-pin QFP CPU replacements, and those had a LOT less to grip than a big 64-pin DIP. (something Atari could also have exploited for early models if they'd changed their stance on expansion after the fact, along with offering "official" piggyback RAM expansion at service centers and perhaps internal Blitter upgrades)

 

But that's another topic. ;)

 

Finally, what was the RAM speed used in the Atari machines? The Apple RAM was slow actually. 450ns, or something like that.

I'm not sure of the actual speed, but 450 ns wasn't slow at all for the time (1977); it was pretty fast, actually. The Amiga only had 280 ns DRAM (the full RAS-to-RAS access time, not the advertised speed rating), or at least the system was configured to access the DRAM at those intervals. (it might have been slightly faster, but it at least fell within that convenient interval -one NTSC color clock, or 2 clocks for the chipset or 68k in the Amiga)

The ST used 250 ns with a similar interleaving scheme iirc. (the 68k takes 4 clock cycles to complete an access; ST/Amiga DMA was configured to "steal" the first 2 cycles for its own access and latch the data, so the 68k only needed the latter 2 cycles on the bus -effectively dual-port memory at 560 ns/1.79 MHz ;) . . . or 500 ns/2 MHz for the ST)
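
The same arithmetic in C, using the ST's numbers (the Amiga works the same way at its own clock):

#include <stdio.h>

int main(void)
{
    double cpu_mhz    = 8.0;             /* ST 68000 clock */
    double access_mhz = cpu_mhz / 4.0;   /* one 68k bus access per 4 clocks */

    printf("CPU port : %.1f M accesses/s = %3.0f ns per access\n",
           access_mhz, 1000.0 / access_mhz);
    printf("RAM cycle: %3.0f ns, so video fits in the other half\n",
           1000.0 / access_mhz / 2.0);
    return 0;
}

That gives the 500 ns/2 MHz CPU port and the ~250 ns RAM cycle mentioned above.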

 

Surely 200ns RAM would have made things work at 2 MHz, or 1.79?

That's extremely fast RAM you're talking about there. :P That's faster than the ST or Amiga by a good margin, and I don't think even doable for DRAM until the mid 80s at least (maybe slightly earlier if you used really high-end DRAM). 200 ns SRAM should have been available by the end of the 70s though, but that's not cheap or practical to use in high quantities. ;) (Atari might have been considering interleaving with the SRAM used in the 3200/Sylvia though, apparently it was using hybrid A8/VCS hardware too, but possibly with additional enhancements . . . except the CPU was actually left at 1.19 MHz, so interleaved DRAM may have been feasible as well)

 

Albeit, if "200 ns" and the above "450 ns" are not the actual access times, but instead other cycle times . . . DRAM is marketed under different speed "ratings" that never correspond to the random access read/write times (complete RAS to RAS time), so if you go by that, it's a totally different context than usable access times. (usually either the CAS time is used -which does make a lot of sense for FPM and later DRAM as that IS the fast page access time, like how the 75 ns FPM DRAM in the jaguar will indeed run at 75 ns for page mode, even though a true random access will take at least 175 ns ;) )

 

Random access times for DRAM don't improve linearly from generation to generation, but show increasingly diminishing returns as time goes on. With really, really old/slow DRAM, RAS time was actually faster than CAS (there are even some rather odd examples of reverse page mode, where a special DRAM controller holds CAS and cycles through different rows -the Astrocade had support for that iirc), but that quickly changed with later generations as CAS times rapidly improved while RAS lagged. (to the extent that 15 ns SDRAM is still ~104 ns for random accesses, and modern DDR2 or DDR3 is still around 40 ns for a true random access, even though a page read/write takes a tiny fraction of that)

There's also the issue of access time (time to respond with valid data on the bus) vs completed read/write cycle time, but for interleaved DMA, you definitely need to take the full read/write cycle time into account as the interleaved access won't be able to start until the previous read/write cycle is completed.
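
Putting the Jaguar figures from this post into a couple of lines of C shows how much the effective speed hinges on the page-hit rate (the hit rates themselves are assumptions):

#include <stdio.h>

int main(void)
{
    const double t_page = 75.0, t_rand = 175.0;   /* ns, figures quoted above */
    for (int hit = 0; hit <= 100; hit += 25) {
        double avg = (hit * t_page + (100 - hit) * t_rand) / 100.0;
        printf("%3d%% page hits: %5.1f ns average, ~%4.1f M accesses/s\n",
               hit, avg, 1000.0 / avg);
    }
    return 0;
}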

 

So it's a bit of a confusing mess to compare RAM speeds. (and again, I'm still not sure what the actual timing is on the RAM Atari used ;))

 

 

The trick for modern DRAM (or any efficient DRAM-based system from the late 80s onward -including things like the Lynx) was to implement buffering/caching and/or separate banks of DRAM to allow efficient use of page mode accesses. Systems up to the Amiga/ST didn't have to worry about that, as they didn't implement page mode and were thus stuck with pretty damn slow RAM. ;) (I think Amiga fastRAM implemented page mode support, but OCS/ECS chipRAM remained limited to very slow interleaved 16-bit CPU accesses iirc -the CPU couldn't even steal the bus for full bandwidth like the chipset could- and AGA didn't fix that either iirc, though it did add 32-bit chipRAM at least -but that's still only double the 1985 OCS CPU bandwidth, and iirc still only the chipset could steal the bus, with the copper/Alice being the only custom chips that could make use of 32-bit or page mode accesses)

 

 

Or, more interestingly, let's say the Atari had to run at the VCS clock for it to work, but ANTIC + GTIA could fetch 80 or 120 bytes per scan line??

Again, that would imply you had double buffered line RAM, which wouldn't require interleaved DMA to work necessarily. (again, MARIA style burst DMA should be an option too)

 

Hmm, maybe they could have done something else besides full double buffering, maybe use dual ported SRAM on-chip so the back end of the line buffer could be filled as the active part was still being scanned out to the screen. (you'd need precise timing to ensure that you didn't overwrite something that had yet to be spit out to the screen ;))

 

That is, unless I'm misunderstanding the mechanics of ANTIC/GTIA's video output. (it was my impression that a display list is used to build up a single scanline into a buffer and spit it out to the screen; assuming it's single buffered, that would require the entire line to be loaded within hblank. I know the 7800's MARIA specifically implemented double buffering of line RAM to allow an entire H-time to load the next line via burst DMA -not sure of the exact per-line bandwidth figures)

 

 

Peripherals would have been simpler, and the original chipset could have been extended to bigger color sets, sprites, etc... without anywhere near the impact it would have had otherwise.

With interleaved DMA, you'd quickly hit a wall as time went on (ST and Amiga speeds weren't that far ahead . . . it would have meant a 1.79/2 MHz 6502 with interleaved DMA at ST/Amiga bandwidths when you alternatively could have had a 3-4 MHz or better 6502 with contention for burst DMA -and that's assuming you didn't push for an added cache/buffer for the CPU to work off the main bus, or add a 2nd DRAM bus for CPU and graphics to work in full parallel -added buses add cost, of course, so contention is the cheap option). Adding fast page mode support in the late 80s could have been a massive boon to performance and directly compliant with an already serial optimized bus configuration. ;)

 

Interleaved DMA is good for very slow systems and quickly breaks down beyond that. One possible option to grandfather an interleaved, single-bus-optimized design over to the world of page mode DRAM might have been multi-bank interleaving, but that requires double the RAM chips on top of most of the features needed for a burst-DMA-oriented system. (you'd still need wait state support for page breaks and refresh)

 

 

This was all stuff that came up in some Jaguar discussions, including one particular case when I asked why they couldn't just use a 50/50 split for the 68k like the ST or Amiga. (the answer, of course, is that the Jaguar is already far, far more efficient than the slow interleaving schemes in the ST and Amiga, and the 68k in the Jaguar is too fast to even allow 1 interleaved access to be "stolen" within its first 2 cycles -with the 75 ns FPM DRAM used in the Jaguar, you'd have to cater to 175 ns random accesses and thus use slower clock speeds or many wait states and only end up with a tiny fraction of the bandwidth the current system allows . . . the main problem with bus sharing and bandwidth in that system is just that some components and operations aren't buffered for peak bandwidth -Jerry, the 68k, and some blitter operations -namely texture mapping- fall into that category, but that's another topic too ;) -as is the more costly option of a multi-bus/multi-bank design, the epitome of which can be seen in the Sega Saturn :P)

 

 

 

So, if you look at raw throughput, yeah, the greater efficiency is there. But taking a holistic view of what the machine could have been, I think it's inferior overall.

Inferior compared to what alternatives???

 

Interleaved DMA won't automatically allow more bandwidth for video or CPU . . . it MIGHT if the CPU or DMA chips are so slow that cheap RAM of the time can be interleaved at 2x the speed, but that wouldn't even mean the chips could make use of that bandwidth. (again, I don't think ANTIC/GTIA would support more bandwidth without modification to work outside of hblank -which MARIA did do, and with peak bandwidth enabled by using burst DMA rather than interleaving)

 

 

 

 

 

 

 

 

OK, my mistake on the C64, but the rest still applies. Interleaving a la Apple/CoCo can be inefficient, and the Atari is a great counter-example . . . what would you rather have, a 1.79 MHz CPU with wait states making it ~1.2 MHz, or a 1 MHz CPU without contention?

 

I think I would take the 1.2, with the video system operating transparent to the CPU, able to fetch a lot of data per scan (more than the current arrangement), no timing issues, VCS style :)

Yes, but you assume there's fast enough RAM available to do that at low cost. The VCS didn't have to deal with DRAM at all, just SRAM and ROM, and no interleaved DMA. (no DMA at all iirc -I don't have a great understanding of TIA, but I was under the impression that the CPU manually built up the display into TIA, sort of like ANTIC does for GTIA, but with more limited sprites and a much simpler playfield)

 

Had they done that for the A8, you'd have 1.6 MHz (1.79 MHz with SRAM) without contention, but a ton of time spent building the display.

 

And even if there WERE cheap RAM fast enough, you'd need ANTIC/GTIA to be capable of using the added bandwidth too. (but then you could also have even more bandwidth with burst DMA and contention -or support modes for both, but that's adding more to cost)

 

 

I wonder if Sylvia was going to use interleaved DMA; it did use SRAM and a 1.19 MHz clock, and was apparently sporting some enhancements in STIA over GTIA along with FRANTIC (which was apparently a normal ANTIC). With SRAM at the time, they probably could have done interleaved DMA at 1.79 MHz. (unless they were aiming at even more bandwidth for video, like 2 video accesses interleaved within each CPU access -though any hits to ROM would likely be much slower, perhaps too slow to interleave anything)

MARIA could have used such interleaving, but I think it was fast enough to use burst DMA to its full potential (ie much faster than interleaving could allow) . . . except that they could have done both and allowed interleaved accesses while the CPU worked as well as burst accesses that halted the CPU like the Amiga could do. ;) (unless they used a 3.58 MHz CPU that could also saturate the bus -with wait states to access ROM . . . or in the Amiga's case, you'd have needed a 14.3 MHz 68000 to saturate the bus)

 

There are also the ever-present R&D cost/time issues limiting things. (plus the GCC guys were brand new to LSI design, so that was a factor . . . and apparently they weren't getting any assistance or collaboration from Atari Inc engineers whatsoever -an Atari-redesigned embedded POKEY would have been useful for one thing -ie POKEY with just the sound portions retained, put in a 20 pin package -or less if you cut out more features, like dropping IRQ -which is rather hard to use with the 7800 anyway- and R/W to make it write-only for an 18-pin package)

Or an embedded DRAM interface for that matter. (use 2 cheap 16kx4-bit DRAM chips like the 600XL was using, especially useful if they weren't going to bother with any interleaving)

Edited by kool kitty89

@kool_kitty: The CoCo ran interleaved at somewhere around 1.7 MHz with a 6809.

CoCo I and II were fixed at .89 MHz in DRAM due to the limits of DRAM and (later) the SAM's capabilities. The CoCo III ran at 1.79 MHz interleaved, but that was in 1986. (probably using similar speed DRAM as the Amiga, which ran interleaved at a similar rate -the 68k just takes 4x as many clock cycles to complete an access as a 6502/6800/6809, and the Amiga's DMA set-up basically split a 3.58 MHz bus into two 1.79 MHz buses for the CPU and chips to use, with the additional ability for the chips to steal 100% of the bus for burst DMA at full bandwidth)

Actually, the CoCo I/II could run at 1.79 MHz when accessing ROM. They could do the same with RAM, but it required disabling RAM and video refresh. Someone has actually written a demo that animates more on-screen objects by using the mode that disables video/RAM refresh for a fixed number of cycles. If you wrote a game for a ROM pack, you should be able to run it at 1.79 MHz except for RAM access, but I don't think anyone ever did that.

 

I think the GIME in the CoCo 3 handles that without having to disable RAM or video refresh.

 

 

The question is whether cranking up speeds would have been possible in the first place using the same memory . . . or, rather, how fast interleaved accesses could be done. (if the Atari would have had to be cut to 1 MHz, that's not a good alternative . . . the C64 was a bit of a waste though, only 1 MHz and in 1982 -it should have either been interleaved or 2 MHz with wait states)

It's not a question of whether it was possible, but of how much it would cost. Does it make sense to increase the cost of a machine like that by 50% to 100%?

In the case of the Motorola chipset that could have been the CoCo upgrade, you could have had over 1 MB of RAM, and it might have made sense to run at 3.5 MHz because it could compete with newer systems.

But the CoCo had a simple design that lent itself to advanced upgrades. Can you imagine updating all the Atari video modes to accept different CPU clocks?

 

 

Looking at what was done with the CoCo 3 is intriguing, because it fundamentally can do what the older computer did, while adding very significant new graphics capability. The CoCo 3 was a nice jump up. So was the //e, with double high-res, basically doubling the video path, without impacting anything.

The CoCo3 was a massive jump: double CPU speed full time, added timer facilities (hblank counter), hardware V/H scroll registers (take that, ST and Apple IIGS :P ), a 6-bit RGB palette with up to 16 indexed colors (4-bit packed pixels, take that again, ST :P) with numerous added resolutions, etc.

The upgrade was pretty straightforward on the CoCo. The original offered graphics modes where you doubled the resolution horizontally and vertically with different settings. They just extended the timers to support more resolutions and added more on-screen colors as well as palette registers. It was just a matter of new timing in the pixel generator circuit and a more flexible color lookup at the D/A output (registers rather than hard-wired values).

The hardware scrolling was a more significant deviation from the original design.

 

(just about the only thing lacking was the sound hardware . . . if they'd just added a bog standard SN -or preferably AY/YM- PSG, that would have opened sound up nicely -with the DAC used occasionally for drums/SFX/etc -actually the YM2413 might have been a good option if it was ready in 1986)

I don't think OPL was affordable, if it was even available at that time. There was an upgraded AY chip that had more control over the individual channels while maintaining backwards compatibility; not sure what part number it is.

 

At least the CoCo 3 got the programmable timer, which can be used to drive the DAC at an acceptable rate. Playing music in the most efficient manner generally requires large pre-mixed samples containing all the music, rather than taking time to mix smaller samples on the fly. But they should have at least included the 8-bit stereo DACs of the Orchestra-90C.
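
Here's a minimal host-side C sketch of that timer-driven playback idea -- on a real CoCo the "DAC register" is the top 6 bits of the PIA port at $FF20 and the tick would be the GIME timer interrupt, but everything here (rates, buffer contents, the simulated register) is illustrative:

#include <stdio.h>
#include <stdint.h>

static uint8_t dac_reg;   /* stands in for the $FF20 port */

/* Called once per sample period (e.g. ~8 kHz): push the next pre-mixed
   sample out; no mixing happens at playback time. */
static void timer_tick(const uint8_t **p, const uint8_t *end)
{
    if (*p < end)
        dac_reg = *(*p)++ & 0xFC;   /* top 6 bits drive the DAC */
}

int main(void)
{
    const uint8_t music[] = { 0x80, 0x9C, 0xB0, 0x9C, 0x80, 0x64 };
    const uint8_t *p = music;

    for (int t = 0; t < 6; t++) {   /* pretend the timer fired 6 times */
        timer_tick(&p, music + sizeof music);
        printf("tick %d: DAC = 0x%02X\n", t, dac_reg);
    }
    return 0;
}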

 

The CoCo 3 got the sort of upgrade the ZX Spectrum could have used to become a truly persistent/evolutionary standard. ;) (had the 128k added that sort of video and CPU upgrade on top of the AY chip, it might have made for real competition against the 16-bit computers ;) -the later SAM Coupe obviously came too late and was still too weak with lack of hardware scrolling)

It's sad it didn't at least get the new modes from the TS2068.

 

 

Yes, aside from the Apple II, none really had good expandability (the Atari would have if the engineers had had their way, and almost did again with the 1090XL . . . ), and most lacked any sort of general evolutionary development. The ST and Amiga seriously suffered from both of those problems as well: the Amiga got some boosts in expandability and both got a wider range of form factors, but for most of their lives it was mostly just an increase in RAM and minor tweaks to the graphics end. The ST Blitter was rather significant but limited to the high end, and Amiga fastRAM was notable, but both lacked faster CPU models for a long time, and only the ST ever got a faster 68000 version with the Mega STE -and neither had a faster low-end model,

The Amiga got the 1200 as the low-end machine, which had a 14 MHz 68EC020. To be honest, that should have come several years earlier though.

 

 

Hmm, maybe they could have done something else besides full double buffering, maybe use dual ported SRAM on-chip so the back end of the line buffer could be filled as the active part was still being scanned out to the screen. (you'd need precise timing to ensure that you didn't overwrite something that had yet to be spit out to the screen ;))

<snip>

I worked on an embedded system that used dual-port RAM in the late 80s. It was expensive stuff back then, so it would have had to be a pretty limited size.

I've often thought about using dual port RAM to make a replica of some old machines... it's not so expensive now.

 

A trick you didn't mention is to double the data memory bus width and cache the 2nd byte(s) for the next read internally on the custom chip. For an 8-bit machine it's a practical option for cutting display memory reads in half.

CPU reads are less predictable so you either need cache or wait states when there is a collision.
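
A sketch of that width-doubling trick in C (hypothetical logic, not any specific chip): fetch 16 bits per display read and latch the second byte on-chip, so the video side only needs the bus every other byte.

#include <stdio.h>
#include <stdint.h>

static uint16_t vram[8] = { 0x1122, 0x3344, 0x5566, 0x7788,
                            0x99AA, 0xBBCC, 0xDDEE, 0xFF00 };

int main(void)
{
    uint8_t latch = 0;
    int bus_reads = 0;

    for (int byte = 0; byte < 16; byte++) {
        uint8_t pixel_data;
        if ((byte & 1) == 0) {              /* even byte: one real bus read */
            uint16_t w = vram[byte / 2];
            pixel_data = w & 0xFF;          /* low byte used immediately */
            latch      = (uint8_t)(w >> 8); /* high byte cached on-chip */
            bus_reads++;
        } else {
            pixel_data = latch;             /* odd byte: served from the latch */
        }
        (void)pixel_data;                   /* would feed the pixel shifter */
    }
    printf("16 display bytes served with %d bus reads\n", bus_reads);
    return 0;
}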


Since the TRS-80 Model II was mentioned I thought I'd look up some prices on Tandy machines.

 

In July '82, a mail order place called Computer Plus had the following prices listed.

Keep in mind those are discounted from Tandy's normal prices.

 

Model III 16K $799

Model III 48K 2 drives & RS232 $1949

Model II 64K $3100

Model 16 128K 1 Drive (68000 machine) $4299

Model 16 128K 2 Drive $4999

A CoCo 1 with 32K and Extended Color Basic was $499

 

In reality, most (all?) of those 32K CoCo machines had 64K. They supposedly used half-defective 64K chips in some, but in my experience there is no such thing, so I'm guessing that was all talk because they didn't want it to compete with the Model III. Plus, I think it's more expensive to find half-defective 64K chips than to buy good ones.

