
"Who Killed the Atari ST?" 1989 article on Atari computers


pacman000


Is it possible to install Windows 10 without the internet and never, ever put it on the internet? No one, not even Microsoft, seems able to answer this question, and when they do, the information is conflicting.

 

Go to Network Settings and set the connection to Metered; that'll keep it from installing updates without your say-so. But I would recommend you use a 3rd-party replacement for Windows Defender, since you won't get updates for it.



I don't think it's possible to install a home version without internet access, but you can run it offline once it's installed. I have no doubt there are commercial distributions available to organisations that you can install without an internet connection, to make rolling out hundreds of machines under the same site license easy, but they're unavailable (legitimately) for home use for obvious reasons.


  • 1 month later...

Prior to the Apple Macintosh, Atari ST, Commodore Amiga, and PCs, there were numerous CP/M systems with a very popular word processor called WordStar. Other than the S-100 bus based CP/M systems, these weren't compatible with each other, but they sold well to businesses and also hobbyists. The only popular CP/M game I recall in those days was the text game Colossal Cave Adventure. When PCs came out, there was an ASCII-art version of Rogue.

 

For the Atari ST, the Alcyon C compiler (which included an assembler as well) was probably the best in terms of producing reasonably efficient code. It was included in the early ST development kits.

 

Although the Macintosh was popular, the programming environment was stuck with a backwards-compatible mode where programs were split up into 32K chunks, using the 68000's 16-bit PC-relative addressing mode. Programs had to use "handles" - pointers to pointers in the first 32K segment - so the Mac could move allocated chunks of memory around, which could happen during most system calls, requiring a program to reload any local pointers from the handles. The toolset, MPW (Macintosh Programmer's Workshop), was text based (it brought up a console window when run). While Microsoft's toolsets evolved into GUI-based environments, the MPW interface remained much the same for about 5 to 7 years, although other development toolsets like Think C and Prototyper (the programmer "drew" the user interface, and Prototyper generated the code) were released.

 

Atari was smart in essentially porting Digital Research's MS-DOS workalike (GEMDOS, with the GEM GUI) to a flat 32-bit model, but the ST never ended up with the popular apps from PCs, other than WordPerfect. I don't recall how popular WordStar still was by the time the Atari ST was first released.


Interesting read, rcgldr. It just confirms how early-stage things still were with the 'high-level language' C, especially on systems with the 68000 CPU family.

As for DR, they never produced a really successful programming package, at least not for Atari - starting with the poor ST BASIC, the RSC Editor... For me, that company is just part of Atari's fall in later years.

Did Atari port Alcyon to the ST, or did DR do it themselves? Or was it joint work? :)

Let's just look at TOS. They had huge problems fitting it into the 192 KB ROM space. Most of it is coded in C, and the compiler is inefficient, especially by today's standards. Why didn't they write more of it in ASM? That could have saved some KBs. Even in 1989, about 5 years after TOS development started, the code efficiency was not good. I'd say DR was just too slow. Too comfortable.

I was able to shrink the TOS 1.04 code size by 6 KB using only one simple optimization - the short absolute address form wherever possible (the bottom and top 32 KB of the address space) - and the hardware addresses were chosen with it in mind.
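(To illustrate the saving - a minimal sketch, assuming a C toolchain targeting the ST; the macro and function names are made up, though 0xFFFF8201 is the ST's real video base high-byte register. On the 68000, an absolute address in the bottom 32 KB ($000000-$007FFF) or the top 32 KB ($FF8000-$FFFFFF on the 24-bit bus) fits in a single sign-extended 16-bit extension word, so each reference costs 4 bytes instead of 6:)

/* Sketch: the 68000's short absolute addressing mode (abs.w).
 *
 *   move.b  $FF8201,d0    ; abs.l form: opcode + 2 extension words = 6 bytes
 *   move.b  $8201.w,d0    ; abs.w form: opcode + 1 extension word  = 4 bytes
 *
 * $8201 sign-extends to $FFFF8201, i.e. into the top 32 KB, which is
 * exactly where the ST's hardware registers live ($FF8xxx). */
#define VBASE_HIGH (*(volatile unsigned char *)0xFFFF8201L)  /* hypothetical name */

unsigned char read_video_base_high(void)
{
    /* Whether the compiler emits the 4-byte abs.w form here depended on
     * the toolchain; per the post above, Alcyon apparently did not, hence
     * the hand-optimization of TOS 1.04. */
    return VBASE_HIGH;
}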

With more optimizations, a couple more KBs could have been saved - which is actually what was done in KAOS TOS only 1 year later (1990); a little later the same team created MagiC. Then, I saved 10 KB by simply packing something that DR/Atari just copied straight from ROM to RAM. It's really sad to see that they used Line-F emulation for the AES instead of that simple solution, just because they were about 7 KB short of ROM space for TOS 1.04.

Conclusion: it was not murder, it was suicide.



If anyone used WordStar on the ST, it was through the PC-Ditto emulator. It was very slow, but then again word processors spend most of their time waiting for user input anyway...

 

The ST's ability to read & write PC formatted floppy disks was a very useful bonus, it sure helped me out back in college.


 


There was nothing "backwards compatible" about the 32K decisions made by Apple in the Mac runtime. Mac apps were encouraged to keep their heap blocks relocatable as much of the time as possible, to minimize heap fragmentation. 16-bit PC-relative operations were both nicely position-independent and 2 bytes smaller than instructions that used absolute addresses; smaller code was very, very good on the Mac.

 

Handles were just non-relocatable blocks (anywhere in memory) containing pointers that the memory manager maintained to relocatable blocks as they were shuffled around. Apps would often call MoreMasters() as part of their startup, so that the non-relocatable master-pointer blocks would be located near the bottom of the heap, again to reduce fragmentation.
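(To make the pattern concrete - a toy, self-contained C model, not the real Toolbox API; every name below is invented. The point is that a handle stays valid across a heap compaction, while a raw pointer cached before a memory-moving call does not:)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy model of the Mac handle pattern. A Handle is a pointer to a
 * "master pointer" that the memory manager owns and repoints whenever
 * it moves the underlying block. */
typedef char **Handle;

enum { MAX_BLOCKS = 16 };
static char  *master_ptr[MAX_BLOCKS];   /* master pointers: fixed location */
static size_t block_len[MAX_BLOCKS];

static Handle toy_new_handle(size_t n)
{
    for (int i = 0; i < MAX_BLOCKS; i++)
        if (master_ptr[i] == NULL) {
            master_ptr[i] = calloc(1, n);
            block_len[i]  = n;
            return &master_ptr[i];
        }
    return NULL;
}

/* Stand-in for any system call that may compact the heap: every
 * relocatable block can move, invalidating raw pointers taken earlier. */
static void toy_system_call(void)
{
    for (int i = 0; i < MAX_BLOCKS; i++)
        if (master_ptr[i] != NULL) {
            char *moved = malloc(block_len[i]);
            memcpy(moved, master_ptr[i], block_len[i]);
            free(master_ptr[i]);
            master_ptr[i] = moved;      /* the handle stays valid */
        }
}

int main(void)
{
    Handle h = toy_new_handle(32);
    strcpy(*h, "hello");

    char *p = *h;          /* the classic bug: caching *h ...          */
    toy_system_call();     /* ... across a call that can move memory   */
    (void)p;               /* p may now dangle; never use it again     */

    printf("%s\n", *h);    /* correct: re-dereference through the handle */
    return 0;
}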

 

GemDOS doesn't even have a heap. It just gives the launched program "the rest of memory" to do with as it will. The GemDOS loader and various simple tools can generate relocation information that allows absolute addresses to be fixed up. It's not clever, it's just about the simplest thing you can do and still be able to load a file off of disk and run it.
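(For the curious, the fixup table in a GEMDOS .PRG really is about as simple as it gets: after the text/data/symbol sections comes a 32-bit offset to the first longword needing relocation (0 = none), then a byte stream where 0 terminates, 1 means "advance 254 bytes without a fixup", and any other even value is the delta to the next longword to patch. A C sketch of the loader's loop, following that documented format - the function names are mine:)

#include <stdint.h>

/* `image` is text+data already copied to the load address `base`;
 * `reloc` points at the relocation table read from the .PRG file.
 * All 32-bit values are big-endian, as the 68000 stores them. */
static uint32_t read32(const uint8_t *p)
{
    return (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16
         | (uint32_t)p[2] << 8  | (uint32_t)p[3];
}

static void write32(uint8_t *p, uint32_t v)
{
    p[0] = (uint8_t)(v >> 24); p[1] = (uint8_t)(v >> 16);
    p[2] = (uint8_t)(v >> 8);  p[3] = (uint8_t)v;
}

static void relocate_prg(uint8_t *image, uint32_t base, const uint8_t *reloc)
{
    uint32_t off = read32(reloc);       /* offset of the first fixup */
    reloc += 4;
    if (off == 0)
        return;                         /* program holds no absolute addresses */

    for (;;) {
        /* The fixup itself: add the load address to the longword. */
        write32(image + off, read32(image + off) + base);

        uint8_t d;
        do {
            d = *reloc++;
            if (d == 0)
                return;                 /* 0 terminates the table */
            if (d == 1)
                off += 254;             /* 1 = long gap, no fixup yet */
        } while (d == 1);
        off += d;                       /* even delta to the next fixup */
    }
}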

 

(When I worked on MPW, I ran a little project that effectively removed the 32K limits on code and data for the 68K Macs. By then the increase in code size didn't matter as much, since most systems were selling with 4MB or more of RAM.)


 


What I meant by backwards compatible was related to an environment originally limited to 128K on the early Macs. It seems pointless on a system with 1MB of RAM and very few, if any, applications that ran 24/7. All of the handles had to be located in the first 32K chunk. Many calls into the system could result in the handles being updated, requiring the programmer to refresh any local pointers via the handles again. The other issue was having to manually edit link scripts to move modules around as needed to fit in 32K chunks; Think C did this automatically. A friend of mine used a new version of MPW about 5 years after I did, so I loaned him my books, thinking they would not be much help, but we were shocked by how little MPW had changed during those 5 years (compared to Microsoft's transition into the Visual C/C++ versions).

 

Another annoyance with the early Macs was the lack of DMA, something that the original PC and most prior CP/M systems had. Programs had to use "blind transfer" mode, where polling was used for the first byte of a 512-byte transfer, followed by a hard loop to transfer the remaining 511 bytes, timed by a hardware delay. There were third-party SCSI cards that offered DMA, so I don't understand why the Macs were so late in including it. Another annoyance was the continuing promise of a pre-emptive multitasking kernel, such as OS/2 or Windows NT had. I recall the promise that pre-emption would be in version 5, then 6, then 7, ..., and it wasn't until OS X that it was implemented (there was A/UX, but it wasn't popular). The combination of a price increase for Macs in late 1989 and the later releases of Windows 3.0 and 3.1 brought the Macs down from around 25% market share to 5%, but to be fair, once 386 EISA clones were released, the top 20 companies accounted for only 50% of the PC market, with the other 50% being mom-and-pop operations building 386 PC compatibles from common components.

Edited by rcgldr

Relocatable code is very interesting - my first serious SW, which I wrote for a competition in my small country in 1984, was MC Tracer for the ZX Spectrum, and for such a tool, working in any RAM area was a normal requirement.

I did not like how it was done in MONS (a tracer/disassembler for the Spectrum), so I invented something different - which was basically the same as what TOS uses :) - even though it was a pretty different CPU.

 

Now, I can understand why early Macs with small RAM forced coders to do everything possible to shrink code size, so they used diverse 'tricks'...

But for me TOS is much more interesting - the trick used to shrink TOS ROM code was Line-F: basically a subroutine call only 2 bytes in size, which could even include parameters. That was present in all 192 KB TOS versions - up to 1.04. In 1.06, which was in 256 KB ROM, there is no more Line-F. 1.04 without Line-F would have been some 6-8 KB too long. The sad thing is that they could have saved even more with some really simple solutions.
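(Roughly how the trick works, as a toy C model: on the 68000, any opcode word whose top four bits are 1111 takes the "Line-F" exception, so the handler can treat the rest of the word as an index into a routine table - a call then costs 2 bytes of ROM instead of a 6-byte jsr. The decoding below is illustrative only, not TOS's actual table layout:)

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Toy model of TOS's Line-F dispatch: a 2-byte word 0xFnnn stands in
 * for a subroutine call; the low 12 bits select the routine. */
typedef void (*routine_t)(void);

static void draw_box(void)  { puts("draw_box");  }
static void draw_text(void) { puts("draw_text"); }

static routine_t line_f_table[] = { draw_box, draw_text };

/* Stand-in for the Line-F exception handler. On real hardware the CPU
 * vectors here automatically when it fetches an 0xFxxx opcode; the
 * handler reads the faulting word and dispatches. */
static void line_f_handler(uint16_t opcode)
{
    line_f_table[opcode & 0x0FFF]();
}

int main(void)
{
    /* A "code stream" where each call costs 2 bytes, not 6. */
    uint16_t code[] = { 0xF000, 0xF001, 0xF000 };
    for (size_t i = 0; i < sizeof code / sizeof code[0]; i++)
        line_f_handler(code[i]);
    return 0;
}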

And an even better compiler, or just some extra hand work, could have saved some 6 KB more - here I mean short addressing (for low RAM and HW registers). Could that even be specified in the Alcyon compiler at all? In the year 1989.


I don't know how much of Atari TOS was written in C versus assembly, and/or whether the Alcyon compiler produced assembly code which then got assembled. I recall a multi-pass 68000 assembler that kept trying to change branches to the short format, but I don't know if the Alcyon toolset's assembler did that. In assembly code, you could suffix a branch with ".s" to specify the short format.

 

Back to Mac programs having to deal with legacy mode: another issue was that programs were split up into two parts, a code part and a resource part. The resource part could be loaded in bits to save space, and also updated (written), which I never quite got. In most other environments, the writable stuff related to a program (default overrides like directory locations) is kept in a separate file. Once Macs started having 1MB or more of RAM, they could have had an alternate flat-address-space program format. I don't recall if the Mac operating systems included virtual memory support on the models that had MMUs. I assume A/UX, which was Unix with a graphical interface for Macs, supported all of this, and it was released long before OS X.

 

One historical toolset for the Mac was Prototyper, which allowed a programmer to use drag-and-drop tools to design a user interface, after which Prototyper would generate the code to support it, and the programmer then added code to what Prototyper generated. Specially formatted comments were used to separate what was generated by Prototyper from what was added by the programmer. Visual Basic for Windows was similar, and the engineers at a few computer-peripheral companies I worked for used Visual Basic to generate the code for interactive graphs, diagrams, text boxes, ..., to work with computer peripherals during the design phase. It was a quick way to create diagnostic or development type programs.

Edited by rcgldr

Most of it is coded in C - I'd guess about 80%. The parts done in ASM - assembled with the Alcyon toolset too, as far as I know, which was capable of that - were mostly the early machine init, BIOS, and XBIOS.

There is a lot of C-generated code where bra.s appears, so I'm sure the compiler was capable of that optimization. But not of the short absolute-address form - or at least it was not set up to use it. Someone could check all this. I'm not a C guy, and there's really no time for everything in life. Things were better with TOS 2.06 (1992).

Edited by ParanoidLittleMan


OK, I think I get your point. With the 68000, PC-relative addresses only need 16-bit offsets. Compilers that supported this would locate some of the function-scope static data between functions in order to use PC-relative addressing for those variables. (This wouldn't help much for global static data, since multiple functions would need access.) Local variables aren't an issue since they are stack-relative, so it's only the static variables. I don't know if the Alcyon C compiler supported this. For the Mac, handles and other forms of global data had to be located in the first 32K chunk of a program. I don't know if load files contained a table of code references to the global variables in the first 32K chunk in order to fix up the addresses to allow for loader-relocated code (since this part would not be self-relocatable). I think there was a way to use 64K chunks on a Mac by taking advantage of the fact that the 16-bit PC offsets were signed and could be negative.

 

I seem to recall an assembler where a directive was needed between functions to provide space for near-range access to static variables from adjacent functions, but I don't recall if this was a 68000 assembler, much less one related to the Atari or Mac.

Edited by rcgldr

Many people fail to remember just how few decent ST software titles were released for the NTSC market, which made it very difficult to treat it as a decent gaming machine. A few companies, like Sierra, ported their games over from '85 to '88, but this paled in comparison to the humongous PC market, and I did not want another niche system (in my teens) like the A8 had turned out to be. And even here in Canada, where the ST was quite successful, it meant that I would be buying Michtron and Antic catalogue titles instead of hot new games from EA. In addition, none of the North American ST magazines did well (or lasted past '91), and they were always filled with depressing letter-writing campaigns to convince software houses to port games to the system: nothing in them suggested anything except that the ST was a system on life support throughout its existence. I remember buying Barbarian in '89 for the PC: it was a terrible port, but at least I could FIND it! In the pre-graphical-browser days it was very difficult to find out what was going on in Europe; if I had known, I probably would have bought an ST (and kept my A8 going as a working system after '92).

We had tons of gaming software for the ST and imported what we could not get here in the USA; it was mainly a gaming platform for us that could also run PC or Mac software with the right hardware or software. My company purposely stopped carrying EA on ALL platforms and tossed the EA rep out as a result of their lack of support for Atari.


 

Very interesting point... PCs running MS operating systems were all about conformity, because they were targeted towards businesses rather than computer hobbyists. They later entered the mainstream when they gained "multimedia" abilities along with getting on the Internet. The Atari ST was one of many platforms that appealed to non-conformists but ultimately was no longer supported by the parent company. Apple is the only exception still around, of course, but they never marketed towards PC users anyway.

 

I read an article years ago that pointed out that although PC hardware is standardized, it can still run alternative operating systems like Linux, and even ST software through emulation. It helped me transition over to the PC platform without being 'completely' stuck with Windows (which I used for games only at the time).

Didn't M$ loan or give Apple some money at a low point (1997)? Otherwise there would be no Apple, since they were on the verge of bankruptcy. Wouldn't it be great if that had not happened?


It was Bill Gates who bought some Apple shares - invested some millions at that time. But your last statement is really something that is not only not nice, but lacks logic too.

I don't think that Apple would have vanished without that 'help'. And what is your problem with Apple? I don't like them, I never bought anything of theirs, but they still have a right to exist.

 

People just think in a very wrong way: equating the PC with Windows. No, in many ways they are opposites. The success of the PC is in its open architecture and the competition between manufacturers.

On the other side, M$ built a huge monopoly with Windows in the OS market.

Who is to blame? Surely there were some even illegal moves from M$ - they were even fined by the EU, about 500 M euros. But the main reasons are the users of home computers, plus the need for "standard SW".

The PC is a kind of standard HW - all motherboards need to be compatible. The same thing with SW would be the greatest computer-related thing that could happen on this planet. But that's a much harder thing.

Imagine an established set of basic OS functions for display, storage, network, etc. That would allow compatible SW to run on every OS that supports those (now very numerous) functions.

OSes from diverse companies would differ in desktop, efficiency, possibly some extra stuff - mostly supporting SW, not mandatory for running the 'standard' SW. And in name and price, of course.


 

There was a version of Microsoft Write for the ST, but it was based on an old version for the Mac and required GDOS for the fonts. And we all know how well Atari Corp. sold things...

 

Thankfully there was Mac emulation, which was how most STs got sold in the late 80's.

 

And GDOS never made it into later TOS ROM upgrades...one of many broken promises....


Based on what I've seen people do in the past, I think it's possible too.

 

I can see video cards being difficult.

 

However, things like audio interfaces, input devices, printers, scanners, etc... should be doable.

 

I still recall a printer driver I needed when the color Deskjet printers first arrived on the scene. Some dude in Germany wrote one and posted it on an FTP server. It worked flawlessly.

 

There is such software on the Mac... it comes in handy for printers that never receive official drivers when macOS is upgraded...

Edited by Lynxpro

Great to see a man here who worked at Atari in those years :-D

landondyer: I really want to talk with you about TOS-related things - I spent months improving it. Please check your PMs.

 

Wait, are you the one behind this, Paranoid?

 

http://atari.8bitchip.info/tos105.html

 

 

You know what I'd like to see in a TOS hack/improvement? GDOS added to ROM, like Atari Corp promised us and should've delivered by the time of TOS 1.02. Leonard Tramiel, over in the Atari Museum group on Facebook, seemed to indicate ROM chip prices weren't the reason why Atari Corp didn't go ahead and add GDOS to the later ROM revisions...

 

Another nice little "Real Atari" thing I'd like to see added to TOS would be native ATASCII support. Adding ATASCII would've been an easy little bone to throw to the A8 crowd. It wouldn't have been much, but anything would've been better than how Corp handled A8 enthusiasts, given their inability to get a sizable portion of them to upgrade to the ST platform.

 

Hell, since Atari Corp did their own "proprietary" ACSI DMA port for hard drives, and later the Enhanced Joystick Ports, they probably should've just created an SIO2 for the ST platform as well for other devices. Then again, they received so little support - and/or offered so little support - for the Mega ST's MegaBus expansion slot, and the VME slots on the Mega STe and TT also had very little going for them...

 

Speaking of SIO, Joe Decuir in the A8 Facebook group said he didn't remember anyone bringing up an enhanced SIO port amongst the features the early Amiga team considered for the Lorraine chipset design. I thought that was rather interesting since Mr. Decuir revisited the subject later in his career with Firewire and USB 2.0 [don't bring up USB 1.1 to him; he's not a fan even if he's the godfather/grandfather of it].


Well, I think that there was pretty big confusion at Atari about what to implement in new models.

Considering GDOS in ROM: in the beginning there was no space for it. Actually, in the beginning there was not enough space for TOS itself, as we know from 1.00-1.04. See above for why, and how it was solved.

There was a serial port in all models, surely slow. But I don't think a faster one would have been a real solution. The time of LAN came, and they made something like it in the TT and Mega STE.

Expansion slots: that's the part that was done very badly, in my opinion. First models: nothing at all.

Mega ST: a new type of slot, not compatible with anything.

TT, Mega STE: VME slot - for expensive and rare expansions, no driver SW...

Falcon: again a new slot type.

Sorry, but Amiga did it much better.



 

True about GDOS originally. That, and DRI hadn't finished it. But as with the infamous Blitter chip, Atari Corp promised GDOS in future TOS versions - along with easy Blitter upgrades for the 520STm and 1040STf - once it became available, and in reality they failed to deliver on both. My point about Leonard Tramiel's comments was that Atari Corp didn't put GDOS into the later TOS ROM upgrades not because of increased chip prices for higher-capacity ROMs, but for some other reason.

 

I think in my own fantasy timeline, Atari Corp pivoted to OS-9 and welded GEM and GDOS to run atop it. Then they would've had rock solid multi-tasking/multi-user support in the OS but also had a great GUI and graphics/font support. But I digress...


Let's see something that was barely mentioned here: HW flaws.

Actually, there are not many of them, but in my opinion there is one huge flaw: the lack of an expansion bus/connector carrying the CPU bus plus some other signals, compatible across many models.

The above sounds pretty awkward, but I needed an exact formulation of what it should do. Explanation below:

Atari's position was that there was no need for it - there is the cartridge port, there is DMA (the ACSI port). Yes, that was good for some things, like the laser printer of that time. The cartridge port was used for diverse things, like video digitizers - not because the port was really good for it, but because there was no better one! I made an EPROM programmer for the ST in 1987, for the cartridge port, and that needed some tricks to work - because the cartridge port does not support writes, only reads. Short-sighted design, that's what it is...

What about users who wanted to expand RAM or update TOS? Those 2 things are elementary - I know it for sure, since I made those upgrades for many Atari users. We could talk here about other upgrades, like video cards and the like, but let's just say the Atari ST was declared a closed system. But should it be closed for RAM and ROM updates? I think that this mistake cost Atari at least a million buyers.

Because for these trivial upgrades people needed to bring the computer to a service shop and pay for all the work of opening the machine, soldering, adding PCBs...

 

What should have been done, and what would it have cost? First, I did not discover hot water here - such expansion slots were present in many micros of that time. The expansion slot should carry the CPU bus, 5V power, some interrupt signals (for disks) and - because the ST has a special RAM circuit - the DRAM lines and signals. Optionally even some video signals. And all of it with extra lines for future models (reserved pins).

With it, people could add RAM expansions and new TOS versions in minutes. And other things - like an EPROM programmer without the tricky write.

The price of such a port? Including everything, maybe 20 bucks max. The price of not putting it in the design of the machines sold: hundreds of millions lost through lower sales.

And the proof that they knew an expansion slot was necessary is the Mega ST - though its slot does not include all the signals I mentioned, and RAM and ROM upgrades still meant a service shop, or very skilled users plus hours of work.

And then - TT, Mega STE - they went with the VME bus... No comment. Falcon - again a new expansion bus. That's just not a good way to keep people with a company's products. Amiga did it better. The PC did it better. You can always use somewhat older cards in a PC - like PCI ones in recent mainboards, which mostly have the later PCI-E slots. Because the football World Championship starts today, I will say:

Atari did not score a goal; they scored an own goal :twisted:



 

A lot of that probably had to do with Jack Tramiel getting Shiraz to design a closed 16/32-bit computer as cheaply as possible; Shiraz had designed the C64, and it was essentially a closed system. It certainly wasn't as RAM-expansion friendly as the Atari 8-bits were, even though it did ultimately support external memory expansion [but that option was not popular and barely supported]. The ST's code name was originally "Rock Bottom Price". And even then, Shiraz was open to including Atari Inc's AMY sound chip - which probably would've been more expensive than the YM2149 - in the design, although they couldn't get it to work. I really need to look at the specs of Atari Inc's RAINBOW graphics chip compared to the SHIFTER as to why it wasn't included in the ST, assuming the design and chip prototypes were still available - and known about - once the Tramiels bought Atari Inc's Consumer Division from Warner. Granted, Atari Inc wasn't going to use RAINBOW or AMY in the proposed "Mickey" game system built around the Amiga Lorraine chipset either, at least in their early planning stages...

 

On second thought, RAINBOW must've been known to the Tramiels, because Mr. Dyer had mentioned on his blog that a lot of what Atari Inc's Advanced Research Division had created circa 1983-1984 wasn't considered cost-effective even going into 1985 and aiming for a third- or fourth-quarter 1985 release.

 

By the way, one way to defuse an Amigan hurling the assertion that the ST is a "Commodore computer at its heart", as opposed to the Amiga being the "true Atari computer", is to mention Mr. Dyer's contributions to the ST, as well as those of other former Atari Inc staff who worked on the project. Without them, the ST's version of GEM - basically its personality - would be as flat and boring as DRI's x86 versions.

Edited by Lynxpro

I don't think that Shiraz dealt with PCB design, or with what type of connectors to use in the ST - and an expansion port is basically that. It was designed to be, as you said, "as cheap as possible". But then, there are parts which could have been cheaper without losing any functionality: the floppy controller - the WDC 1772 was not cheap at that time, nor even years later; I know, I bought some in those years. And don't forget that the YM is not only for sound - it plays a part in the floppy port, parallel port, and serial port. So its multifunctionality is what prevailed, I'm sure. And that also made problems with SW when they added new control lines to it in the Mega STE, TT, and Falcon. I would not compare the ST with the C64, which had the maximum RAM its CPU could address. No - for me there is no excuse for leaving out an expansion port on a machine that cost about $1000 (when that was much more money than later).

 

Amiga people are of course wrong; they care only about one thing: bragging about how the Amiga is superior, and so on...



 

I think they used the Western Digital floppy controller chip in order to save time getting the ST released, as opposed to creating their own custom chip, which probably would've had better results. When they started working on revisions, they should've designed a custom chip to handle floppy control as well as the non-audio duties the YM2149 had been assigned, and then patched TOS. Of course, that probably would've broken compatibility with some games, as each TOS upgrade had a habit of doing, due to programmers using undocumented routines despite Atari Corp explicitly telling them not to. The ST's MMU was extremely limiting and should've been upgraded to at least match the maximum RAM the 68000 could address - 16MB, as opposed to 4MB. Apparently, cost was the reason why both Atari Corp and Amiga chose to roll their own custom MMUs instead of using Motorola's official wares.

 

I've found it interesting that none of Apple, Atari Corp, or Commodore chose to bank-switch RAM above 16MB on their 68000 computers, as they had done with their prior 8-bit 65xx-based computer lines. Maybe the memory maps worked too differently.


  • 2 weeks later...
...

GemDOS doesn't even have a heap. It just gives the launched program "the rest of memory" to do with as it will. The GemDOS loader and various simple tools can generate relocation information that allows absolute addresses to be fixed up. It's not clever, it's just about the simplest thing you can do and still be able to load a file off of disk and run it.

...

Well, that's actually too harsh even for someone as critical toward TOS as myself. Yes, GemDOS gives all available RAM in one block (which is normally all free RAM) to the launched PRG. There is really no need for something like a heap - that is just a solution from the 16-bit addressing era, and I never felt the need for one. But saying that only relocation is performed is not correct. There is more code for memory handling: a PRG can shrink its allocated RAM block - which is necessary when it wants to start another program from the current one - and can allocate blocks whenever it wants, which can be considered a heap. TOS needs to keep track of all of it, and free all blocks belonging to a PRG when it exits.
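(The startup dance that follows from this, sketched in C - assuming an Atari toolchain where Mshrink() (GEMDOS call 0x4A) comes from <osbind.h>; the basepage fields follow the documented layout, but the struct and constant here are simplified stand-ins:)

#include <osbind.h>   /* Atari toolchain header providing Mshrink() */

/* Simplified stand-in for the documented GEMDOS basepage layout. */
typedef struct {
    long p_lowtpa;          /* start of the TPA (this basepage)  */
    long p_hitpa;           /* first byte past the TPA           */
    long p_tbase, p_tlen;   /* text segment base and length      */
    long p_dbase, p_dlen;   /* data segment base and length      */
    long p_bbase, p_blen;   /* BSS segment base and length       */
    /* ...remaining basepage fields omitted...                   */
} TOYBASEPAGE;

#define STACK_RESERVE 4096L     /* arbitrary choice */

/* A freshly launched PRG owns "the rest of memory"; before it can
 * Malloc() or Pexec() another program, it must give back what it
 * doesn't need: basepage (256 bytes) + text + data + BSS + stack. */
void give_back_memory(TOYBASEPAGE *bp)
{
    long keep = 256L + bp->p_tlen + bp->p_dlen + bp->p_blen + STACK_RESERVE;
    Mshrink((void *)bp, keep);
}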

 

What the early Macs did because of their low RAM size - which even had to hold part of the OS - was really not smart to implement on the ST, with much more RAM and TOS in ROM. A much bigger problem than the longer 6-byte jumps (compared to 4 bytes for bra or jmp adr(pc), and for data addressing) was using C instead of assembler. That resulted in code that was not 5-10% longer, but 3x longer in many cases.

I don't know how much of early Mac SW was done in ASM, and I don't much care. It is better to compare the ST with the Amiga anyway, since it was much more similar to the ST.

I will say it again: I see the transition to C, the 68000, and GUIs as the hardest part of creating TOS. Some were just faster at it.

