kool kitty89

Members
  • Posts

    2,465
  • Joined

  • Last visited

1 Follower

About kool kitty89

  • Birthday 08/08/1989

Profile Information

  • Gender
    Male
  • Location
    San Jose, CA

  1. Having the 64-bit DRAM soldered to the board would make sense (possibly dual-bank 32-bit feeding a 64-bit SHIFTER bus; the Falcon uses dual 16-bit banks and a 32-bit SHIFTER bus), with a separate bank or separate bus of 16-bit or 32-bit DRAM for expansion, possibly with a feature allowing that other DRAM to be re-mapped as ST-RAM for 1/2/4 MB ST software compatibility (though much of that software was written for TOS, so it potentially could be handled at the OS level and not need full hardware compatibility, or at least not SHIFTER access to the full >1 MB area). A simpler but faster DRAM controller using page-mode bursts on 16-bit DRAM with a 32-bit data bus latch for a 68020 or '030 would've been more cost-effective and flexible, and could thus have used 8-bit (non-parity) 30-pin SIMMs in pairs, the same as the STe uses. Granted, the TT SHIFTER itself should've been able to be fed via 16-bit DRAM using page mode and/or bank interleave, and more so if limited to half the bandwidth of actual TT modes (i.e. 640x200x4 and 320x200x8 at TV resolutions), had Atari implemented a memory controller suitable for that later on. (Sort of like the 16-bit TT bus, but earlier.) There's a variety of configurations that could've used a 16 MHz 68020 or EC020 on a single, shared 16-bit bus with wait states and still been similar to or faster than the Amiga 1200 at least. (If using bursts rather than interleaved DMA, the lower-bandwidth video modes should've been faster, including the likely popular use of running existing ST productivity software, just faster.)

Use of DIP sockets would be OK for final assembly overseas, but not for assembly in the US or for end user (or licensed service center) expansion, as it ran into the mandatory price floor for DRAM chips imported from Japan into the US, where no such limitations applied to assembled memory modules on circuit boards. 72-pin 36-bit (32-bit without parity) SIMMs came out with later IBM PS/2 models some time in the late 1980s (after 1987), but were definitely available from Japanese and Korean DRAM manufacturers by 1989 (see the 1989 Toshiba databook below). OKI had them by 1990 as well.
https://archive.org/details/bitsavers_toshibadatSMemory_66723306
30-pin SIMMs were available earlier than that, but I'm not sure when they became cheap/available from most/all manufacturers; notably, Samsung had 30-pin 8/9-bit SIMMs available by 1988, which would mean both Korean and Japanese sources were available.

I'm not sure how they were actually going to implement that mode, but a 512x512 bitmap would seem inefficient compared to using a 320x200 screen in virtual 512x512 space. Or perhaps a fixed 512x512 pixel map would be used, but not all of it need be allocated; you'd just have to restrict what part of the screen was visible when rotating, and more so if scaling was supported and you zoomed out. If using it for a rotated 3D floor/ground texture filling only 1/4 of the visible screen area (with a normal bitmap or tilemap background above that), it seems like memory usage could be more conservative. That said, the fact they were interested in a rotating bitmap effect is interesting insofar as the potential incentive to add that feature to another design with a blitter for drawing scaled/rotated bitmaps, either as an alternative to the Panther's sprite-scaling feature or to complement it.
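As a rough sanity check on that memory trade-off, a back-of-envelope sketch (my own numbers, assuming 8 bits per pixel, not anything from an Atari doc):

# RAM cost of the rotation-bitmap options discussed above (assumes 8bpp)
def kbytes(w, h, bpp=8):
    return w * h * bpp / 8 / 1024

print(kbytes(512, 512))  # fixed 512x512 map: 256.0 kB
print(kbytes(320, 200))  # plain 320x200 screen: 62.5 kB
print(kbytes(320, 50))   # quarter-height floor/ground strip: ~15.6 kB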
On that note, one way they could've made the Panther (and subsequent Jaguar) object processor logic more useful for a composite 3D or blitter-rendered 2D scene would be to render objects to a framebuffer rather than a line buffer, thus working more like the Lynx's blitter, the 3DO, or the Sega Saturn VDP1, except without the 2-point rotate or 4-point quad distortion feature, though that would be the simplest way to expand the Object Processor into a polygon architecture. However, you'd have the same disadvantage of forward texture mapping that the Saturn, 3DO, and Nvidia NV1 architectures had (distortion on both gouraud shading and translucency effects, plus heavy overdraw on heavily warped quads or zoomed-out objects) vs using more typical reverse texture mapping for rasterization like the Jaguar's rotate mode uses. (Which, oddly enough, the scaling/rotation Gate Array in the Mega CD also uses, along with a 16x16x4-bit stamp cache.) Though for Wolfenstein 3D style ray-casting engines, a Panther-style object processor rendering to framebuffer would probably work pretty well (one object per column and per sprite). The Jaguar's large line buffers also wouldn't have been so wasteful if they could be disabled for simple framebuffer scanning, or for synchronous object scanning for composite framebuffer screen modes using much shorter line buffers swapped multiple times per line and multiple small object lists per line segment.

Though in terms of cancelling the Panther itself but coming up with something related to the Panther and/or Slipstream that was quicker/simpler than the Jaguar: modify the Panther Object Processor to render to a framebuffer in 8-bit wide DRAM (using the same 8-bit wide pixel write port) and use 2 banks of 64kx8-bit DRAM for 320x200 (or 288x224, or 256x256) 8bpp 256-color framebuffers, writing to the back buffer while the active buffer is read by a simple VDC (possibly directly derived from the Slipstream display controller), and you'd now have a fairly flexible extension of the Panther architecture which could also have either software-rendered (or blitter-rendered) 3D or pseudo-3D composited in with the objects. Though you'd need object list interrupts to draw polygonal stuff between consecutive objects, or just render polygon animation to object tiles fetched from shared DRAM, or just do multiple rendering passes using different object lists. You could even use 16-bit DRAM with fast-page-mode fetches in 16-byte bursts for object list reads almost as fast as 32-bit SRAM (with a 16 or 20 MHz Panther using 100 or 80 ns FPM DRAM, 3-tick RC cycles, and 1-tick page-mode cycles), assuming the Flare II team could manage a fast enough DRAM controller suitable for those sorts of cycle times. (Also the option of using DRAM with the shorter precharge rating specific to most 64kx4-bit and especially 64kx16-bit DRAM chips available, and doing 2-tick RC cycle times at 16 MHz, or more conservatively closer to 15 MHz, and using similar timing to the ST and STe MMU, but with page-mode burst functionality.)
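For what it's worth, that burst timing pencils out; a minimal sketch under my own tick accounting (3 ticks to open a row, 1 tick per page-mode beat, 2 ticks per 32-bit SRAM read, 16 MHz assumed):

CLOCK = 16e6                  # assumed 16 MHz Panther clock
BURST = 16                    # bytes per object-list burst

ticks_fpm = 3 + (BURST // 2 - 1)           # 8 beats on a 16-bit FPM DRAM bus
ticks_sram = (BURST // 4) * 2              # 4 longword reads from 32-bit SRAM
print(BURST / (ticks_fpm / CLOCK) / 1e6)   # ~25.6 MB/s from 16-bit FPM DRAM
print(BURST / (ticks_sram / CLOCK) / 1e6)  # 32.0 MB/s from 32-bit SRAM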
Including the Flare DSP would probably be more important for assisting with 3D than the blitter, same for software texture mapping or rotation effects (if they didn't add that in hardware), plus having FIFOs feeding the PWM DAC registers (which the 1993 Slipstream doc notes were added) would allow more realistic dual-purpose sound + coprocessing on the DSP, and using lower sample rates for DAC writes without inducing PWM squeal. (Granted, better still would be a DMA sound circuit to just fill the FIFOs automatically at a set timer frequency, to further free up the DSP and still do simple software-mixed sound effects and/or music ... though the Ensoniq chips, even the older DOC chip, would be great if Atari could get them cheap enough, and the DOC just needs 8-bit wide DRAM, so a single 64kx8-bit bank would be fine, and it'd be a lot easier to get good/impressive sound output from composers not on the super-talented chiptune composer + custom sound engine side and/or demoscene end of things.)

Atari might have also reconsidered using an embedded 65816 core and removing the cost of the 68000 entirely, especially seeing as Nintendo went that direction, though that would mean switching to a different chip vendor since Toshiba lacked a 65816 license. (Sticking with VLSI like they had with the Lynx chips would've been an option.) The 65816 is crappy for running compiled code, but if it was a good deal faster than the SNES's, it should've been pretty decent, and much better still for 650x assembly language programmers. (The planned 8 MHz on-chip and 4 MHz external speed of the Super XE should've been plenty.) Switching to cheaper 8-bit ROMs for bulk storage and working mostly from code/data in RAM should also have cut costs further. As far as manipulating Panther lists goes, there doesn't seem to be a big disadvantage to a fast processor on an 8-bit bus with short instruction cycles vs a 16 MHz 68000 on a 16-bit bus. Plus if you did want to add external, separate local CPU RAM, it would only need to be 8 bits wide and not 16. Plus single byte-wise software-rendered pixels (like for software-rendered 3D) would be just fine on an 8-bit bus.

But aside from that, on the Panther architecture itself: I'd forgotten some things I'd made notes on years back and never discussed, which would be more relevant to something Atari could've quickly added in 1991 while trying to fix the (apparently) non-functional or poor-yielding Panther chips as-is in early 1991. Run-length objects have to use 16 bits per run, with 8 bits for color and 8 bits for run length (width), and also must use 16-bit memory or use only 16 bits per 32-bit longword in 32-bit SRAM (very inefficient). Those limitations of the run-length object mode, the inefficient use of RAM/ROM for just 32 colors, and the lack of a compact 8-bits-per-run format could be worked around by including a memory mapper chip that could let 8-bit wide RAM or ROM be seen as 16-bit RAM or ROM, with the 8 bits unpacked to 5 bits of color and 3 bits of run length, so 32 colors with 1 to 8 pixel runs (really useful for a wide array of graphics). You could potentially also include an option for 4+4-bit mapping for 16 colors and 1 to 16 pixel run lengths, mapping to either bits 0 to 3 or 1 to 4 to use the upper or lower 16 colors of the palette. 16-bit wide ROM could be mapped as 2 banks of 8-bit ROM for this function when reading directly from ROM, or separate 8- and 16-bit ROM chips could be used on cart. Plus with only 8 (or 16) pixels max per run, you don't have to worry nearly as much about overdraw or clipping as you would with the larger 1 to 256 pixel run format. (You could also just restrict max run length when encoding, but that wastes the unused bits, whereas here you just have the 3 or 4 bits to work with total.)
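A minimal sketch of that 5+3 packed run format (the exact field layout is my assumption; a real mapper chip would do this in logic, not software):

def pack_run(color, length):
    # 5-bit color index (0-31) in the high bits, run length 1-8 in the low 3
    assert 0 <= color < 32 and 1 <= length <= 8
    return (color << 3) | (length - 1)

def unpack_run(byte):
    return byte >> 3, (byte & 0x7) + 1    # -> (color, run length)

b = pack_run(17, 6)              # a 6-pixel run of color 17 fits in one byte
assert unpack_run(b) == (17, 6)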
The Panther memory map includes areas reserved for both 32-bit wide and 16-bit wide SRAM, so presumably the Object Processor would access both regions with the 2-clock-tick ~124 ns cycle time (at 16.108 MHz). As such, adding just a single 32kx8-bit 120 ns SRAM chip to the existing 32kB of 32-bit SRAM could be mapped as 16-bit SRAM, cycled as such, and used for the above 5+3-bit run-length format, or for normal 1/2/4-bit packed-pixel objects if using the 4+4-bit mapping, just limited to 16 colors rather than the full 32. This would be the bare minimum cheapest option to upgrade the Panther while keeping it very low cost. (Using an 80 or 85 ns PSRAM chip with the Panther down-clocked closer to 15 MHz, cycled at closer to the rated 135 ns of 85 ns PSRAM, plus basic refresh logic, could be an alternative to SRAM; refresh might only be needed during vblank or inactive screen lines, as PSRAM rows could be mapped to auto-refresh from Panther list and data reads.) Better would be 2x 32kx8-bit SRAMs for 16 bits, usable both for the special case of the above 5+3-bit run-length data and as normal 16-bit wide SRAM for all other purposes (16-bit object data for 1/2/4-bit objects or even for 32-color 8-bit objects, probably mainly if you wanted a software-rendered framebuffer window); unlike 16-bit ROM, 16-bit SRAM would scan 8-bit pixel data at max Panther pixel output rate. (32-bit SRAM isn't needed for that, just for lists.) You'd then have an additional 64kB of 0-wait-state RAM for the 68000 to work in as well.

Now, hypothetically, they could even have switched to just 8kB of 32-bit SRAM via four 2kx8-bit chips, but I only see that as useful if they were planning ahead to later embed 8kB of SRAM inside a later Panther ASIC revision, since several major suppliers didn't even offer SRAMs smaller than 8kB by 1990, or only offered them in special high-speed grades (like 55 ns or faster) which the Panther probably wasn't going to make use of. Though it would also mean using existing supplies of 7800 SRAMs (if they had any) and suppliers providing 2kx8-bit SRAMs for the 7800. I've never seen a 7800 revision that switched to just a single 8kx8-bit SRAM chip, so either Atari had an oversupply of 2kB chips or they were still legitimately cheaper to use (even if only marginally so). In the worst case, Atari might have used a mix of 2k and 8k SRAMs (with only 8kB of 32-bit SRAM used in either case) until they could switch to embedded 8kB SRAM or possibly a custom, single 2kx32-bit SRAM chip.

8kB of 32-bit SRAM just for Panther lists would still allow a good deal: practically speaking, a maximum of 512 objects if every single byte of that 8kB was used for lists, and probably many fewer than that actually used for typical games, and even for most atypical cases. You just wouldn't have the "2000 sprites" figure the Panther touted, but 512 sprites is still way more than anyone else at the time (including the Neo Geo), and it doesn't stretch reality as far as you'd need to in order to actually have 2000 sprites on-screen. Even if you decide to double-buffer object lists and/or reserve a good chunk of space for block-mode command lists, there should be more than enough to use within realistic limitations for actual games.
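That 512-object ceiling assumes 16 bytes (4 longwords) per object list entry, which is my reading of the arithmetic rather than a quoted spec:

sram_bytes = 8 * 1024
bytes_per_object = 16                       # assumed: 4 longwords per entry
print(sram_bytes // bytes_per_object)       # 512 objects max
print(sram_bytes // bytes_per_object // 2)  # 256 each if double-buffered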
As such, 8kB of 32-bit SRAM + 64kB of 16-bit SRAM would seem considerably more useful overall, and the 16-bit area could potentially be replaced with PSRAM or even DRAM later on with upgrades to the chips. (Granted, you'd most likely have to use 128kB of DRAM with half of it wasted, unless re-implemented in a new console, like the Jaguar with backwards compatibility.) Plus with RLE object data loaded into RAM, it can still be further compressed on cart, since RLE alone isn't that efficient a compression format.

Also, the Panther SHIFTER chip seems to be the only part Atari could make with their in-house Styra semiconductor company, so the 2-chip solution might've been lower cost for that reason. Also, the transistor count of that SHIFTER chip was likely very close to that of a VGA RAMDAC as it was, though the pin count was higher than a basic 28-pin DIP style RAMDAC, and bulk volumes may have made the latter cheaper. The 32-color limit isn't so bad if you can actually take advantage of it and make efficient use of ROM and RAM, as you would with a packed 8-bit RLE data format with 5-bit color + 3-bit run length. Plus with 32 global colors usable for most background and sprite objects, it shouldn't have been too hard to have better color usage than many Mega Drive games even without using palette swaps, and a more limited number of cases where the 18-bit RGB shows advantages over the SNES's 15-bit RGB. (You wouldn't have translucent blending effects like the SNES could do, though, and flicker or dithering would be needed instead.) Plus, Atari potentially could've pitched the Panther object processor for arcade use by Atari Games, coupled with a more expensive 256-color SHIFTER mated to the Panther chip already outputting 8-bit pixel data (with just 5 bits being valid for the home console version).

That said, with a similar transistor budget, it should've been reasonably possible to use 6-bit line buffers rather than 5-bit ones, plus 64 colors of CLUT, especially if you consider that dual-port SRAM can take up close to double the chip space of single-port SRAM. So the existing 32x18-bit SRAM might have worked as 64x18 bits, but without the ability to update the CLUT during active display. (Though it should still be possible strictly within hblank and vblank.) If you assume all that, you'd have enough line buffer space for 266x6 bits instead of 320x5. Or 2x 288x6-bit lines with a 64x14-bit CLUT. If you assume the CLUT SRAM takes exactly the same space as the line buffer SRAM cells, then you'd have enough for 2x 256x6-bit line buffers and 64x11 bits of CLUT (which could be 4-4-3 RGB, or probably better as 9-bit RGB colors with 4 shades each, or potentially 64 6-bit RGB colors x 32 shades output in 18-bit RGB space; those sorts of color spaces are difficult if you're using entirely external resistors for the DACs, though fairly simple if you at least do the intensity step internally and output the same intensity to the external RGB outputs, in which case the 4-shade implementation would be much easier, albeit a simple 1-bit intensity that halves the output is simplest). Or, again, just use 11-bit RGB and still have better than the Mega Drive's 9-bit RGB. You could also do 2x 240x6-bit lines + a 64x14-bit CLUT (and not lose too much visible screen area compared to standard 5.369 MHz 256-pixel displays).
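Those line buffer/CLUT trades check out arithmetically, using the rule of thumb above that a dual-port bit cell costs about 2x a single-port one (my own accounting):

lb = 2 * 320 * 5               # baseline: two 320x5-bit line buffers

# Dual-port CLUT counted at 2x cost, replacements using single-port CLUT:
print(lb + 2 * 32 * 18)        # 4352 cell units (baseline)
print(2 * 288 * 6 + 64 * 14)   # 4352: 2x 288x6 lines + 64x14 CLUT fits

# CLUT cells costed the same as line-buffer cells:
print(lb + 32 * 18)            # 3776 cell units (baseline)
print(2 * 256 * 6 + 64 * 11)   # 3776: 2x 256x6 lines + 64x11 CLUT fits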
I'm also not convinced the 6.44 MHz pixel clock for NTSC mode (cited/suggested in the Panther documentation) would've been that great of a pixel output in terms of composite video quality. Ideally, you'd want an integer number of chroma clocks and pixel clocks per line to minimize artifacting (and avoid pixel offset/jitter issues as would happen with a non-integer number of pixel clocks per line). Even better is when a relatively simple fraction or multiple of the chroma clock is used for the pixel clock. 7.15909 MHz would be an obvious easy choice (454 clocks per line = 227 chroma clocks = 15.7689 kHz h-sync; 456 clocks per line = 228 chroma clocks = 15.69976 kHz). Though this would leave a horizontal border, it would at least show all 320 pixels, vs 6.4432 MHz showing only about 307 pixels on a typical TV. For 256 pixels, 5.3693175 MHz (342 clocks for 15.69976 kHz) is just about perfect, and 240 pixels shows the same border space as 320 pixels at 7.15909 MHz. The 5.3693 MHz clock rate is also simply 1/6 of the Panther's system clock of 32.215905 MHz. 6.2642 MHz gives ~299 pixels visible and makes for perfectly square pixels on typical NTSC TVs, with 399 clocks per line = 15.69976 kHz (again, 228 chroma clocks). The latter would likely use a 25.056815 MHz crystal (7x NTSC colorburst) to feed the SHIFTER, deriving the pixel clock and chroma clock from it, and if they continued to have chip yield problems, it wouldn't be that bad of a clock speed to drop to from the planned 32.215905 MHz. (It would also now allow 150 ns SRAM, 100 ns PSRAM, or 80 ns DRAM to be used in place of 120 ns SRAM.) That or just 6.2642x5 = 31.321 MHz for the Panther clock source. 288 pixels would be well matched to 3/16 of the Panther clock, or 6.040482 MHz, but you'd have to make do with the same fractional color clock count that the NTSC STe uses, with 384 pixel clocks producing a 15.730422 kHz H-sync rate and 227.5555 chroma clocks (at 3.579545 MHz) per line. A 32.1531 MHz source clock would allow this, but generating the chroma clock now becomes more complicated with 228/2048. And at that point you might as well use a cheaper 6.02871 MHz crystal, divide by 384 to synthesize H-sync, then multiply by 228 for the chroma clock. (x5 = 30.14355 MHz would be OK for the Panther clock, too.)
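The candidate dot clocks above are easy to verify against the exact colorburst value; a quick script:

CHROMA = 315e6 / 88    # exact NTSC colorburst: 3.579545... MHz

for dot_hz, clocks_per_line in [(7.15909e6, 454), (7.15909e6, 456),
                                (5.3693175e6, 342), (6.2642e6, 399)]:
    hsync = dot_hz / clocks_per_line
    print(f"{hsync:.2f} Hz line rate, "
          f"{CHROMA / hsync:.3f} chroma clocks per line")
# -> 227.000, 228.000, 228.000, 228.000 chroma clocks respectively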
  2. The Colecovision platform itself was also based on a home computer chipset: the same TMS9918 graphics chip as the TI-99/4, which went on to power the Sega SG-1000 Mk.1 and II (which did get a keyboard/computer expansion), but more notably the MSX, which was probably the most successful use of that chipset, albeit almost entirely in Japan with a limited presence in Europe. (And a possible absence in the US due to Jack Tramiel's own efforts aimed specifically at preventing Japanese entry into the North American computer market and the potential for predatory business practices that might come with it, albeit this is a whole other topic.) The Adam itself might've lasted longer if it hadn't been for the fatal flaw of its tape drives, combined with misprinted manuals, all on top of the video game crash meaning they didn't have the Colecovision to hold them over while they worked out the kinks with the Adam.

Granted, the real thing that got me thinking about a MARIA-based computer system wasn't a 7800 add-on, but Leonard Tramiel's interview mentioning the missed potential of the TED chip at Commodore as a $50, 40-column, full-color-text-capable home computer (presumably what the C16 was supposed to have been in 1984), and while MARIA wouldn't be a good basis for something that cheap and that early, it got me thinking in terms of Jack Tramiel being interested in continuing to push hard into the low end of the computer market with aggressively priced, new, competitive products. And then on top of that, the concept of compatibility achieved at the OS (or BASIC interpreter) level, as well as full compatibility with existing peripherals (for the Atari 8-bit this would mean all SIO devices), but not full internal hardware architecture compatibility. With engineering resources strained and all going to completing the ST, they would initially have had only Atari's existing (production ready) designs to work with, albeit MARIA wasn't even Atari Inc's property; it was a separate Warner Communications contract, which is the main thing that led to delays and protracted negotiations for payment that eventually got sorted out in 1985.

MARIA did also support character modes, and only needed SRAM for the lists, so even just 2kB of SRAM + character set (or sets) in ROM would do, re-using the bare minimum of Atari Inc's existing inventory (and portfolio) of custom chips and off-the-shelf 650x compatible I/O components to complete the system, presumably in parallel with the industrial design work going into the new production of the XE line. (Like FREDDIE for interfacing 16kB of DRAM; POKEY for keyboard, POT, and joystick I/O, re-using keyboard I/O lines and possibly using the 2 row-strobe lines to differentiate between 2 joystick ports by running them to what normally maps to ground.) It also should've been simple to allow MARIA to use DRAM at ROM speeds with the 250 or 200 ns DRAMs already being used in the XL and early XEs (MARIA needs 3 cycles at 7.15909 MHz per ROM read, so it's cycled at 2.3864 MHz vs 1.79 MHz in the A8, meaning FREDDIE would need to be clocked a bit higher; the CPU could also potentially be clocked at 2.3864 MHz, though they'd then need to switch to 1.79 MHz when accessing POKEY, or else select POKEY chips that worked reliably at 2.3864 MHz).
Including the full Atari 8-bit chipset and MARIA would likely be too expensive to meet the sort of competitive/aggressive market price point desired, and it wasn't until around the time of the XEGS that they seemed to be seriously considering a new, fully integrated A8 on a chip with enhancements as a replacement for that (with the Super XE chipset project). And then it was a considerably more advanced system with graphics and sound more in the range of (or better than) the STe, coming just in time to run into the DRAM shortage of 1988/89. Granted, there are several other potential options Atari Corp could've gone for in the 1985-1989 period with existing or slightly modified versions of chips they already had on hand.

On the issue of the XEGS and 5200, and computer/console overlapping hardware or repurposing (one way or the other) that came up a few times earlier in this thread: selling a computer that doubles as a game console was in general more successful, whether that was the main marketing point of the parent company or the platform was just so well suited to such and got lots of 3rd party support; the Atari 400 and ZX Spectrum, for example. The 5200 failed to cost-reduce the system sufficiently to really merit its incompatibility with the 400/800 (compared to just cost-reducing the 16k 400 model, re-using the case and low-cost membrane keyboard) while also trying to push itself as a premium system that ended up being priced higher than the 400 itself during, if not before, the computer price war in 1983. (And the 600XL and 800XL redesign delayed things further, even without the added delay of the management changeover from Ray Kassar to James Morgan.) The 5200 also simply changed its memory map and expanded cartridge address space to 32kB, but did nothing to actually ensure licensed 3rd party development/publishing, so aside from a very short period required for reverse engineering or leaked documentation, it failed the biggest practical reason for introducing the system at all. (The 7800 addressed this with its copyright/trademark Atari logo displaying encryption signature check at start-up.)

The 400 itself was already a fine successor to the 2600, and short of the specific set of advantages the 7800 itself brought, there was nothing needed but cost reduction via further internal board integration and conforming to the less aggressive FCC Class C electronics standard ... well, that and re-naming the 400 as a game machine in name, to make it clear to consumers (something that was at the core of the later XEGS). Producing the 400 case in game console black plastic with a matching keyboard color scheme wouldn't have hurt either. They really needed a cost-reduced 400 replacement to go with the 1200XL's release, instead of the 5200. The 5200's analog joysticks, or any other sort of analog joystick unrelated to the 5200 controllers themselves, would also still easily be possible via standard VCS or 400/800 compatible joystick ports, though they also could've easily omitted the keypad and used the onboard membrane keyboard for games demanding that complexity. Then they just needed a 32kB model with mechanical keyboard to fill the intermediate gap (like a 32kB 400 with an aftermarket keyboard installed), either re-using the 400's case or the 600/600XL form factor. (They could also design the motherboard to take either just 16kx4-bit DRAM chips or "half bad" 64kx1-bit chips, with provision to allow the full 48 or 64 kB later on when normal 64kx1-bit chips got cheap enough.)
There's a number of other things Atari could've done with the A8 computers around 1982, and other options something closer to the 5200 could've taken as well, but most of those aren't very valid unless they implemented authentication/lockout or genuinely made the system cheaper and more price competitive. (Omitting PIA and using POKEY for I/O was still an option for the latter, using POKEY lines to read just 2 normal joystick ports, among other things.) But this is all way, way outside the topic of the Panther, or Atari Corp in 1989-91 (except maybe the mention of the Super XE chipset), and worth moving to other threads to continue if anyone's interested.
  3. The NTSC ST/STF/STFM and MEGA ST computers use a 32.0424 or 32.04245 MHz clock oscillator to drive the SHIFTER and derive the other clock rates in the system. The PAL region counterparts use 32.084988 MHz, and the same for the PAL STe. (The NTSC STe uses 32.215905 MHz, which is simply NTSC colorburst (chroma) x9, or rather exactly 3.579545 x9, though 32.215909 would be more correct; the true NTSC chroma value is 3.57954545454545... MHz, or exactly 315/88.) The NTSC STe clock is no mystery, but the other two have baffled me for a long time and come up in various places in discussion where no one seems to know why, other than perhaps Atari got those specific values cheap in bulk and they were close enough to get the proper TV sync rates. However, I think I've found the real reason: Atari engineers (presumably Shiraz Shivji specifically) wanted the pixel clock (dot clock), chroma clock rate, and h-sync rate matched up so that an integer number of both chroma and dot clock cycles fit per scanline, to avoid dot crawl, moire, or other video artifacting as well as jitter or zig-zag spacing of pixels between lines, and still have a compatible sync rate on top of all that for 263- and 313-line NTSC and PAL TVs and monitors. On top of all that, he wanted to get as close as possible to 8 MHz to make the most of the 8 MHz rated 68000. (Side note: some people claim the 32.04245 MHz version was only in the MEGA ST models, but I've seen it inside STM and STFM boards.)

So if you look at it that way, for NTSC:

32.04245 MHz /4 (8 MHz GLUE clock) /508 (GLUE clocks per scanline) = 15.76892 kHz
15.76892 kHz /263 (lines per frame) = 59.95788 Hz

The 15.76892 kHz line rate is what's important here. NTSC colorburst = 3.579545 MHz; /15.76892 kHz = 227 color clocks per scanline. Or, without any intermediate rounding:

32.04245 /4 /508 x227 = 3.579545349 MHz, or rounded to the source clock's 7 significant figures: 3.579545 MHz
32.0424 /4 /508 x227 = 3.579539764 MHz, or rounded to 6 figures: 3.57954 MHz

So in 320x200 16-color mode with an 8 MHz dot clock, you have exactly 508 pixels and 227 colorburst clocks per scanline, so every single line of the screen will stay in phase (or retain the same phase offset from left to right). Also, while normal NTSC broadcast used roughly a 15.73 kHz line rate and 59.94 Hz, the older black and white standard used exactly 15.75 kHz and 60.0 Hz, and all color sets had to be compatible with that. It's also analog hardware and needs to have an acceptably wide range of operation and margin for error to be reliable.
(15.69 to 15.79 kHz would probably be generally acceptable, possibly with more preference towards the lower side of that. Chroma/228 = 15.69976 kHz is quite common among game consoles and home computers using 262-line screens: the SMS and Mega Drive use it, and I think the NES, SNES, and Atari 7800 do too; and I think the VCS, Atari 8-bit, and Amiga all use colorburst/227 = 15.7689 kHz with 263 lines, ignoring Amiga interlace modes, which effectively switch between 262 and 263 lines on even/odd fields per 525-line interlaced frame.)

And for PAL, with the 32.084988 MHz ST clock rate and the 4.43361875 MHz PAL colorburst clock:

32.084988 /4 /512 = 15.666498 kHz
15.666498 kHz /313 = 50.052709 Hz
4.43361875 MHz /283 = 15.66649735 kHz
or 32.084988 /4 /512 x283 = 4.433618947 MHz, rounded to 8 figures: 4.4336189 MHz

That said, the actual implementation of composite video color encoding in STM/FM units seems to often (if not always) use a discrete colorburst crystal (inside the SHIFTER's sheet-metal shielded box, right next to the 32 MHz crystal). So apparently that was cheaper or easier and worked well enough to not bother with synthesizing the colorburst clock from the 32 MHz crystal source. Maybe they'd originally intended to have the GLUE chip synthesize it along with the sync signals, but didn't have time or chip space to implement the x227 or x283 clock multiplier PLL circuit inside it (or intended to feed external PLL logic with the H-sync output from the GLUE).

Apparently they also decided it wasn't important to even have an integer number of colorburst clocks per line when it came to the STe, given the 32.215905 MHz source clock it uses: 512 pixel clocks per line at 8.05397625 MHz = 15.730422363 kHz, /263 = 59.811491876 Hz. (I've seen mixed info on whether it's 263 or 262 lines; 262 would give 60.03978 Hz.) But you end up with 2048/9 = 227.555555556 colorburst (3.579545 MHz) clocks per scanline, which means the phase alignment between pixel clock and color clock will shift slightly each line and then repeat/loop that shifting pattern every 9 scanlines, so you could end up with moire and/or dot crawl patterns on areas of high contrast. The severity of that will depend on the RGB video encoder used, and they may have improved enough by 1989 to not have so much of a problem, though the Mega Drive has very noticeable "rainbow banding" moire artifacts on fine, high-contrast stipple/dither pattern areas or column/strip dithering. (Checkerboard mesh results in diagonal bands of colors and vertical strips result in vertical bands, both of which shift and shimmer during smooth scrolling.) Had they wanted to retain the exact screen timing of the previous NTSC ST models but switch to the same 512 8-MHz clock ticks per line that PAL units used, they'd have used 32.29475 MHz and derived colorburst from the h-sync output.
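All of the above is easy to reproduce; a quick script using the figures in this post:

NTSC_CHROMA = 315e6 / 88      # 3.579545... MHz exact
PAL_CHROMA = 4.43361875e6

def derive(osc_hz, clocks_per_line, lines):
    hsync = osc_hz / 4 / clocks_per_line   # /4 = GLUE clock, then per line
    return hsync, hsync / lines

for name, osc, cpl, lines, chroma in [
        ("NTSC ST",  32.04245e6,  508, 263, NTSC_CHROMA),
        ("PAL ST",   32.084988e6, 512, 313, PAL_CHROMA),
        ("NTSC STe", 32.215905e6, 512, 263, NTSC_CHROMA)]:
    h, v = derive(osc, cpl, lines)
    print(f"{name}: {h:.2f} Hz, {v:.3f} Hz, {chroma/h:.4f} chroma/line")
# NTSC ST:  15768.92 Hz, 59.958 Hz, 227.0000 chroma clocks per line
# PAL ST:   15666.50 Hz, 50.053 Hz, 283.0000
# NTSC STe: 15730.42 Hz, 59.811 Hz, 227.5556 (the 2048/9 misfit)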
  4. Other than trying to send him a message on Facebook, I wouldn't know where to start. Recent accounts from people who have spoken with him (or met him at one of the recent conventions) point to him being more interested in talking about this stuff than he was some years ago. (Though he's much more interested in astronomy, popular science, and education related stuff from what I've seen, which are more his actual passions.) The fact he's been open to going to conventions and has apparently enjoyed himself is helpful.

Also, one thing has been nagging me after I noticed it again on your site, regarding the Slipstream 4 Version 3.3 documentation. It's described as:

"Slipstream Rev4 Reference Guide v3.3 5th October 1993 The 3rd document from John is thankfully in perfect condition and required no fixing. It is from a much later SS4 revision of the Slipstream chipset and now mentions CDROM, and the 386sx processor This is the last possible step in the Slipstream product evolution before Flare developed the Atari Jaguar Flare Two based hardware. It will be interesting if any Jaguar enthusiasts can spot any early signs of Jaguar technology in there..."

The date of that revision of the document is after Atari had locked down the Jaguar for mass production, and about a month before the limited US release in November of 1993. But that's a late revision (Version 3.3, after all) of the SS4 chipset, and there are no other, earlier documents for that version. Were there any indications from John Mathieson or Martin Brennan as to where that stage of the Slipstream fit into the timeline? It's got some features the Jaguar blitter got (gouraud shading and a bitmap rotate mode for scaling/rotating sprites and texture mapping polygon spans) but is otherwise a much more straightforward refinement and enhancement of the Slipstream ASIC (much, much simpler and probably a good deal cheaper than the Jaguar's TOM and JERRY). The simplicity of it points to it possibly dating to before the Jaguar seriously got going, or before the first prototype chips taped out in late 1991; or early Jaguar development and further development of the Slipstream could've been in parallel, all "on paper", around the 1990/91 period. There's also nothing on the Slipstream 3, or what that stage of development looked like. (The Slipstream 2 being the 1989 8086-based production version of the Slipstream for the KMS.) Otherwise there's the possibility that they did most/all of the added design work on the Slipstream 4 after the Jaguar was set for mass production in late 1992 (or at least that's when the final revisions of TOM and JERRY went out for production, a good 10 months before any saw store shelves).

The SS2 would've been aging quickly by the time the Panther was formally cancelled in spring of '91, with the Jaguar obviously a good way off (still entirely a paper design); Atari was hurting for new projects, and the SS4's specs (and projected clock speed) are much more feasible for a low-cost early 3D based console with decent-ish 2D capabilities (by console and arcade standards). Plus the Jaguar's texture mapping function has no effort put into it to use the full bus width or speed, so it would've been just as fast on the SS4 blitter. (I.e. texture-heavy games bottleneck the Jaguar so badly that the SS might have done them faster; plus the Slipstream at least has a 16-bit bus latch to reduce host CPU bus conflicts, and the Jaguar doesn't even do that for the 68000 ...
and that's assuming Atari went with the cheaper 68000 option and not the 386SX Flare was suggesting or Holloway had requested.) On a side note, there are some interesting, possibly ironic parallels between Konix's MS development and Epyx's development of the "Handy" (which became the Lynx), with several of the Amiga engineers doing that work (some of them ex-Atari Inc engineers as well) vs Konix with ex-Sinclair engineers. Both Epyx and Konix had significant game joystick product lines and shared licenses for some of the same joysticks. Both companies ran out of money to actually bring the product to market, but Epyx ended up handing it off to Atari (who they'd already been making games for, and had a connection to through Mike Katz, who ran Atari's Entertainment division and had previously been president of Epyx, though he left before the Lynx was released), while Konix ended up selling off their joystick line and betting the farm on the Slipstream rather than finding a business partner to front the necessary resources to bring it to market. (Atari was only in the very early stages of the Panther concept in mid '89 and had just prior been working on the Super XEGS concept, with plans to source chips from Ricoh in Japan, so they were looking for a new 8/16-bit game system or console/computer hybrid to fit that side of the market.) Granted, if Atari had the Slipstream, or localized it for North America, it probably would've been more like the XEGS if anything, in as far as the disk drive being a separate accessory; they probably wouldn't have used the built-in modular controller Konix was using, but probably would have considered keyboard and mouse accessories, as well as Konix's modular controller, as separate accessories.
  5. @Crazyace Prior to SIMMs, there were at least SIPPs, which are pin compatible and use identical headers on motherboards. The idea isn't just expandability; in fact I don't think Atari added them to the STe for expandability, but instead to reduce cost and to deal with the DRAM shortage and import restrictions. Also critical for both assembly/sub-assembly and user (or service center) installation: SIPPs have symmetrical power and ground lines, so plugging them in backwards is harmless, vs instantly destroying most DRAM chips (I've done this myself when working on some 286 builds). RAM expansion boards had no import restrictions from Japan, so Atari could have assembly done in the US as well as overseas without any of the legal or supply issues that using bare DRAM chips would entail. I think opening up and changing the RAM on an STe voided the warranty (though it could be cheaply done at service centers), just like with older models.

And if Atari actually wanted to go the flexible, open-box expandability route, they could've just made an ST motherboard in AT or Baby AT standard PC case form factor with an AT standard power connector + expansion slots in standard AT locations (and I/O headers + brackets filling the first few slots) ... I'd argue they should've used standard AT bus slots too, with simple byte swapping of the data lines for big/little endian conversion (or of the address lines for 8-bit cards) ... you could've had dedicated ST-compatible BIOS versions of some boards, but also just the potential for PC BIOS emulation/initialization of standard cards. (Plus AT bus = IDE bus = cheap hard drives.) Or at the very least use the common PC clone form factor with expansion slots + onboard I/O like the Atari PC 4 (mostly) used, Compaq style or Tandy style. (And low-cost oriented like Tandy, unlike the Amiga 2000.)

Anyway, from 1987 to at least 1991 the US had set price floors for importation of Japanese DRAM chips into the United States, and on top of that you had poor yields of 1Mbit DRAM chips for many manufacturers, combined with production tooling having already shifted away from 256k DRAMs (or at least not being expanded, due to investment in 1Mbit chips instead), so you had a combined shortage and an even more acute price increase in domestic US DRAM prices. That meant Atari had to abandon any attempts or plans to move or expand final assembly (or upgrades via socketed DRAM, potentially at their Federated locations) and had to deal with higher DRAM prices across the board, even if not quite as bad for overseas production. (CBM didn't have any such problems in Canada or at any overseas facilities outside of the US.)

Atari also got in trouble for importing and reselling 256k DRAMs around '88/89 (apparently mostly limited to within Silicon Valley ... including several hundred thousand dollars' worth to a vendor in Morgan Hill: a semi-rural town just south of San Jose that was more rural back then). They claimed it was surplus stockpile, but were brought up on charges of smuggling that dragged on, got dropped, got re-examined, and I think no legal consequences came of it. That's strange overall, given they had trouble keeping ST prices low due to the RAM shortage, and any sort of surplus wouldn't make sense unless it was specifically 150 ns chips that failed at ST timing specs.
(Plus Atari used a lot of Korean DRAMs, so it would've only been the Japanese ones that were problematic, many of which were purchased before the restrictions began.) They were supposedly 256k chips, so not the 64k chips used only in 8-bit models, and also not the 64kx4-bit chips used in the later model XEs, since those were all newer chips that shouldn't have failed in STs (though I don't think the ST ever used 64kx4 DRAMs anyway, just the XEs and Lynx). Still weird, given Tramiel had previously been known to find new product niches to use surplus components in, and there's all sorts of things you could do with 150 ns DRAMs that only run at official specs and not slightly faster. (A 256kB 260XE would be one possibility, or an ST at 7 or 7.5 MHz with all the same chips except GLUE, providing different dividers for video sync timing, then just sell those models at a discount; a 7/14 MHz pixel clock is still fine for 320/640 pixels on TVs, and 28 MHz would probably still show 640 pixels in hires, or might need different monitor calibration. Albeit an unmodified GLUE would output VGA monitor compatible timing if clocked at 7 MHz, or technically 7.047 MHz, since /224 = 31.46 kHz, /500 = 62.92 Hz: so VGA h-sync and within the 60~70 Hz, 525~449 line specs, with a pixel clock slightly lower than 720-pixel-wide VGA and well above 640-wide's 25.175 MHz.)

So the quick, dirty, and cheap option would've been a monochrome-only ST at 7.05 MHz with very conservative DRAM timing for 150 ns chips, possibly using cheaper, generic monochrome EGA/VGA compatible TTL or analog grayscale monitors tolerant of that sync rate. (Not standard EGA, but a lot of Super Hercules or Super EGA modes used 31~32 kHz, many were also ST monochrome 35.8 kHz compatible, and for that matter, a cheap NTSC clock crystal at 28.63636 MHz like the Amiga used could provide a 7.15909 MHz CPU + GLUE clock and 31.96 kHz, 63.92 Hz, and work with a ton of different monochrome monitors of the 1987/88 period.)

But I could be wrong, and maybe they didn't just have a parts surplus and genuinely were trying to smuggle DRAM to make money on the side, but that seems pretty stupid, especially while Jack was still CEO, or even just after Sam took over. (And a hires monochrome-only ST at a discounted price probably still would've sold ... no good for games, but most of the serious computing and music/MIDI oriented stuff was best in, or required, hires anyway ... then again, the print ads and TV ads I've seen from back then did fail to really emphasize the strengths of 640x400 monochrome.)

Also, I just stumbled on the Super XEGS documentation from 1988, which I'd missed back in 2019 when it was posted on the Atarimuseum. It sheds more light on what Atari Corp was thinking and just how much effort they were putting into trying to produce a new game console and an extended 8-bit family system. At first I thought it might have been more like the Panther and even used the same GAME SHIFTER line buffer chip, but it looks like it only needed a single 320x5-bit buffer, not 2, and had a much more Sega/Nintendo/Hudson Soft (PC Engine) style of tile + sprite architecture, but with expanded bitmap framebuffer modes more on par with the ST or Apple IIGS. It's not very MARIA-like, where the Panther definitely seems more like a MARIA-inspired architecture with a complex/flexible object/display list, but cutting out the character mode entirely.
(The Super XE docs even have "Display List Architecture" crossed off at one point, so it seems like they considered expanding on the A8 display list but decided against it, too; the opposite direction from the Panther.) It also has a full-screen background rotation mode, like SNES Mode 7, though it was probably going to be a linear framebuffer (and 4bpp 16 colors, not 8bpp 256 colors) rather than the SNES's tiled Mode 7. They also went the opposite direction from the Panther there, using scaled/zoomed sprites rather than a single scaled/rotated background. Atari was working with Ricoh at the time on a planned XE-on-a-chip ASIC with a 4/8 MHz 65816 (8 MHz with 64 bytes of on-chip RAM, it looks like) and an external DRAM controller ("New FREDDIE"). It makes me wonder how much common engineering went into the SNES and this Atari project, or if at least the Mode 7 feature was related to this.

That said, I'd criticize the lack of unified development of a cheap, multi-purpose ST SHIFTER + game console useable chip, in general terms, or at least limited to 2 progressive designs in parallel that replace the ST SHIFTER, ANTIC+GTIA, and TIA+MARIA. I.e. not compatible with all 3, but paring their line-up down to just 2 platforms and just 2 graphics standards. (So have an A8+ST compatible upgraded video chip plus an enhanced 7800-on-a-chip with better graphics and a faster CPU core: since it's 650x, lots of options for manufacturers with existing licenses, fewer if they had to have an '816 license, but VLSI had both, and they used them for the Lynx.) Or have an upgraded ST architecture plus A8+7800 combined on a chip; the latter could approach some of the Panther's features but use a fast, embedded 65816 core instead of an external 68000. (In fact, both could use different ASICs with embedded 65816 cores, which could also serve as a dedicated sound+I/O coprocessor in an enhanced ST or TT.) Plus the Super XE targeted a 5-bit line buffer, which could've been employed for new 5-bitplane modes where A8 extended packed pixels couldn't do that. (Albeit I think the line buffer only needs to be used for sprite+character modes, hence why 640x2-bit screen modes can be supported, likely without using the line buffer, unlike MARIA, where the line buffer can be configured as 320x3 or 160x6 ... though only 25 colors are useable, so it's no better than 160x5.)

I used to think more in terms of just discontinuing the 8-bit hardware and focusing any new console graphics on something also useful for an upgraded ST (fewer chips to produce), trying to think in terms of the Tramiel-era business model, but they clearly had a lot more interest in extending the 8-bit line than I thought, which changes things. Though one idea was taking the SHIFTER and adding MARIA-style DLLs and display lists for a flexible framebuffer + object system, with double line buffers to freely allow all the extra SHIFTER DMA cycles to be used (all the wasted H-border bandwidth, or 256 bytes per line using STe MCU timing or PAL ST MMU/GLUE timing), but then also enable bus saturation modes for up to 512 bytes per line, plus flexible use of the line buffer for higher color depths at lower resolution.
(Albeit MARIA already has 960-bit line buffers, so 960x1, 480x2, 320x3, 240x4, 192x5, 160x6, 120x8; the same bit pool divided by the pixel depth, as checked in the sketch further below. And you'd only need 1280 bits minimum to support all existing ST modes, or less than that if you restricted some modes to single-line-buffer orientation with a single framebuffer only, possibly for bitplane modes only, in which case MARIA already has 1920 bits = 960x2, 640x3, 480x4, 384x5, 320x6, 240x8. And if they wanted to be really cheap-but-better-than-Amiga, use just 32x9 bits for CLUT and max 6 bitplanes for 64 halfbrite colors, mapping 9-bit CLUT entries into 12-bit space, as good as the Amiga but using less CRAM, plus optionally re-mapped as 16x18-bit for VGA-quality 16 colors, an 8-bit ROM palette mode simulating the 256-color MARIA palette in chunky pixels, 8-bit RGB, or 32 9-bit CLUT colors x 8 shades in 18-bit colorspace; you could do other CLUT re-mappings, but I think 3-bit vs 6-bit channels would be simpler to design.)

Also, using MARIA's existing DLL format, if you extended that to 4 bytes (also good for a 16-bit bus), you could extend the zone selection from 2 bits to 5, for 8 to 256 lines, so you can do 1 big display list per TV-res screen or 2 lists per 512-line screen (plus 3 more control bits for other things, like per-line or per-display-list colorspace mode select: i.e. 9-bit, 18-bit, 8-bit ROM, 8-bit RGB, 8-bit CLUT+shade, maybe others). ANTIC+GTIA are older, relatively low-transistor-count and simpler chips, so maybe it wouldn't have cost much to add those in as well. (Or been almost free transistor-cost wise, but require more engineering effort to implement them seamlessly vs just chunking in an extra logical block ...)

OTOH, dropping both of those ideas in favor of a blitter capable of hardware scaling and rotation (which would double as texture-mapped spans for polygons or 3D plane effects) would've done all of the above and been a lot more flexible, at the expense of being slower for a given graphics bus bandwidth (i.e. slower if just emulating the fixed features of scaled sprites OR one rotated background, but way more flexible and more efficient for many situations where you don't need that full fixed-function effect). The absolute minimum addition to the Slipstream would've been hardware rotation, and you'd have something impressive for 2D/pseudo-3D even if arcade and console style 2D games would run slower than the competition. (But OK by 1991 PC VGA standards, or a bit later even. I'm not sure, but the Slipstream seems like it could pull off Jazz Jackrabbit, ignoring RAM constraints ... since it uses relatively few and relatively large sprites and a 2D background with no parallax and limited foreground priority layers, the latter reduced at low detail settings.) Early 90s PC-style space/air combat sims and fantasy games, ray-casting FPSs, and graphic adventure games could've been their niche while having some overlap with common 3rd party console/arcade games. (An ST mouse peripheral would make a lot of those games easier to play ... a keyboard would be nice, but not as necessary; FPSs and point-and-click adventure games would benefit a lot, and sim-style shooters would be better with a keyboard + analog joystick.) IMO, offering PC-style interface peripherals would've been more useful than the angle Konix was going for with an "unconventional" game console interface, plus it wouldn't have been far off the plot Atari was going towards with the Super XEGS.
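The line-buffer sketch referenced above; each listed configuration is just the same bit pool divided by the pixel depth:

for pool in (960, 1920):      # MARIA's single and paired line-buffer bits
    print(pool, " ".join(f"{pool // bpp}x{bpp}" for bpp in (1, 2, 3, 4, 5, 6, 8)))
# 960  -> 960x1 480x2 320x3 240x4 192x5 160x6 120x8
# 1920 -> 1920x1 960x2 640x3 480x4 384x5 320x6 240x8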
Sort of like what the Mega CD supported, but without the need to render into tile order; just use 4-bit or 8-bit linear packed pixels. Basically, add a scaling/rotation effect to the Slipstream chipset and you'd have a powerful and unique feature set able to do the things Atari was already interested in (at least in 1988 and 1989) before Martin Brennan even came on board and before the SNES specs were public (or possibly before they even had that feature), but taking the simpler approach the Flare team did of "do graphics one way, the best way" rather than the background + sprite hardware + blitter of the Amiga or other game consoles (sans the blitter). Obviously, if they were doing a re-spin of the Slipstream hardware itself, they could fix or improve other things too, like using a faster DRAM controller (to do 2-tick DRAM cycle times) or a faster chipset as a whole (stick with 3-tick DRAM cycle times but run the ASIC at ~20 MHz with a more variable pixel clock ... or 21.48 MHz with a 5.37 MHz pixel clock just for the common 256-pixel width resolution), and/or adding page-mode support for blitter operations, or at least for a fast fill function. (If there's no line buffer, or at least a FIFO of some sort, you'd still have to do interleaved video data fetches, but screen clearing during hblank or vblank would be possible, and just intermittent faster fill operations if not synchronized with screen timing.)

The sprite drawing function already works only 1 pixel at a time, and so does the vector line drawing function, so keeping things just as simple with a texture mapping feature would make sense (that's all the Jaguar's is anyway). Since the blitter mask register already supports 4-bit masking for 4-bit sprite data and 4 bits of CLUT offset (16 sets of 16 colors), you could easily extend that feature to do 16 shades/light levels for flat-shaded texture-mapped 3D as well (or drop to 8 light levels with 2 16-color texture palettes to choose from). Granted, rotation is more computationally intensive than just scaling, so it might make sense if the scaling mode could be done "free" (as fast as un-scaled blitter sprites) while rotation might be slower. Also, they might have implemented the scaling/rotation routines by duplicating the DSP logic block and adding an embedded ROM for it to run from, but still allowing that second DSP to be programmable when scaling/rotation mode isn't enabled (sort of like the SNES's Mode 7 multiplier unit being available when Mode 7 is off, except a lot more useful than a bare multiplier).

The Jaguar itself was too risky and ambitious to be the ONLY design. If they ever thought it could be production-ready by 1992, that was severely over-optimistic, and using Standard Cell logic was faster and denser than Gate Array logic (as the Panther and Slipstream used) but meant longer delays between tape-out and silicon, and between silicon revisions, and restricted which manufacturers they could source chips from. Plus putting so much into TOM meant compounded bugs were more likely, and a bigger, more complex chip revises more slowly.
Like if it had still been a 2-chip design with the same motherboard footprint and pin count (similar cost for physical chip packaging), but removed the GPU RISC from TOM and implemented it on the second chip (putting the GPU onto the sound/I/O chip and intending it to do double duty: sound + 3D math and game logic when possible), initially using gate array logic, possibly at a more conservative targeted clock speed, and intending to allow both chips to run at different speeds if necessary: that would've all been a lot more foolproof. Then just plan a single-chip Standard Cell implementation as either a cost-reducing measure or as a full successor with more features or just a higher clock rate. They could've hedged their bets even further by having an alternate sound+I/O ASIC that just used the simpler Slipstream DSP plus a separate DMA sound circuit that could read from DSP RAM or main RAM (so DSP-math-heavy 3D games and such could at least use simple DMA sound for sound effects), and had that chip act as a bus gateway between the 64-bit bus and the host processor, possibly with a 16-bit wide DRAM controller suitable for a 68000 and a narrower/cheaper 16-bit wide connection to cart ROM. (Since you now lack the GPU to act as CPU, local RAM + ROM access separate from GPU RAM would be better than trying to do a true single-bus system.)

Except even then, I'd argue the Object List processor was overkill. If the Panther had been released, then having an enhanced and backwards compatible OLP inside a successor would make sense, but after the Panther's cancellation, they could've cut that out and beefed up the blitter to work with a fast but much simpler display controller (with short line buffers or FIFOs sufficient for good page-mode performance) and then put more chip space towards dedicated blitter texture RAM (for fast sprite/tile rendering and texture mapping) and fast blitter scaling (either dedicated scaling at full bus speed, or enhancement of the texture-mapping rotation feature to full bus speed, at least when reading from internal texture RAM). A dedicated blitter CLUT for color expansion (and fitting more into internal texture RAM) would've been more useful as well; or just use one 256-color CLUT, but give it to the blitter when rendering 16-bit direct color and to the VDC when doing paletted framebuffer modes.

The object processor's sprite system really didn't make sense for any 3D or pseudo-3D game where the blitter rendered any sort of foreground or terrain or scenery, since you can't composite OP objects on a per-pixel basis with a blitter scene (there's no per-pixel priority in the framebuffer to support it, and the OLP can't use the Z buffer for object priority either), so the fast scaling sprite function is only any good in an all-sprite game, and the much slower blitter rotation mode has to be used for scaled sprites in any other situation. (Even in a game like Doom where the full scene is texture mapped, the enemy sprites and projectiles really, really slow things down, which also means the higher difficulty settings run noticeably slower than the easy ones: I think minimum difficulty hits 30 FPS fairly often, but usually below 20 at high difficulty settings ...
granted, Doom would be a good example where it'd run much faster if only a dedicated un-rotated scaling function were added to the blitter, since wall columns and sprites use straight scaling and only the floor/ceiling spans use affine-mapping-style rotation.)

Well, that or using an embedded 65816 inside the sound/IO chip along with the DSP (or J-RISC) as the host processor, working in its own fast scratchpad RAM (and due to 650x access timing, you could allow the DSP or RISC "free" interleaved access to 8-bit SRAM at half speed, or DMA to/from that scratchpad, without slowing the '816), and have a second 65816 core inside the blitter+display controller ASIC for blitter handling, so it can "babysit" the blitter full-time without the massive waste that using the J-RISC for that would be (keep a blitter command list in 8-bit scratchpad RAM and have the 65816 poll status or receive an interrupt for each blitter operation). They could've even done embedded 68HC000 cores, but that's more transistors, arguably no better performance, and a more limited (but still decent) selection of manufacturers with 68k licenses. (Hitachi, Toshiba, and Motorola come to mind, and with the 80 ns Standard Cell ASIC process Atari did use for the Jaguar, Toshiba and Motorola were both relevant, but a 65816 could be had for really cheap.)

Plus an 8-bit local external sound/CPU bus + cheap 8-bit-wide ROM used only for streaming sound data and bulk storage would also cut ROM costs (sort of like the high-speed serial ROMs the N64 used) and get you a 5 MB/s DMA transfer rate with then-common 200 ns ROMs. (Atari used a lot of Macronix ROMs for the Jaguar, and the slowest grade in their 1993 databooks was 200 ns ... Atari was also using PROMs and not mask ROMs, which reflects their low production numbers vs cheaper high-volume mask ROMs.) And I know 6502/816 cores suck at running C code, and some compilers were especially awful, but something like a 10-16 MHz core clock in internal SRAM should've helped a lot and been better than a 13 MHz 68000 without any local RAM at all (and still better off than the 68k in the Jag if the cart ROM bus were separate from the main DRAM bus, and better still with some external local sound-bus DRAM, PSRAM, or SRAM, even just a single 32kx8-bit chip).

Actually, deleting the external CPU of the Slipstream and using an embedded 65816 instead, sourcing chips from VLSI (as they did with the Lynx), would probably have been cheaper and faster than the existing 8086 system. Or, with just a 6 MHz 65816, there's no "probably": it would've been definitely and quite substantially faster. And it probably could've been 12 MHz internal, 6 MHz external in 100 ns PSRAM, and 3 MHz external in 250 ns ROM or DRAM, plus the 16-bit CPU data bus latch could run at 12 MHz and be used to accelerate sequential byte accesses for 16-bit CPU operations, so every other byte fetch is full 12 MHz with the other bytes at 6 or 3 MHz depending on data location. (If they'd released the initial system around 1990 as-is, with all the internal modifications related to adding the 65816 + scratchpad RAM, Flare/Atari could've stuck with PSRAM and planned a DRAM-based version later on.) Plus the 65816 is very fast at simple I/O handling and almost anything conventional 2D video games need, with very fast response time for driving the blitter and very fast interrupt handling. And the SNES used it, just vastly slower than what I'm suggesting
(more like the SA-1 chip some late-gen SNES games used). A single-chip ASIC like that + 128kB of PSRAM should've been cheaper than the Panther with 32kB of SRAM + a 68000. (32kx8 PSRAMs were probably a little more expensive than 8kx8 SRAMs, but the cost of the external 68k would probably more than make up the difference.) OTOH, using just 2 128kx8-bit PSRAMs for 256kB and less board space might be better still. If they'd spent the time to also add the scaling+rotation blitter feature, it probably would've been 1991 already anyway, and 256kB would make even more sense. OTOH, if you added an external sound chip or a simple DMA sound circuit instead (to drive the PWM DACs and free up the DSP), you might get enough performance out of the DSP alone doing scaling/rotation effects in software, or with an embedded ROM specific to that function, keeping DSP RAM free for other things.

Also, CLUT RAM is already 16 bits wide, and if they used external resistor arrays for video DACs, it would've been trivial to expand the 12-bit RGB space to 16-bit, or a bit less simple (but really neat) to implement a custom 16-bit RGBI or RGBY colorspace with 12-bit RGB and 4-bit intensity/luminance mapped into a 24-bit colorspace. (That way you'd get true 16 shades of any 16 colors from 12-bit RGB, and could add a 128x16-bit direct color mode allowing flat-shaded lighting effects by simply setting the 4-bit intensity element; you could also give all 512 bytes of the CLUT RAM to the DSP in this mode.)

Given the specs and the intent of a game system that's also usable as a home computer, the Super XE would have beaten the STe's game capabilities and even had some raw processing power advantages (for reasonably optimized 65816 code it should've been much faster at a number of things). It could've filled the low-end computer niche with (roughly) Amiga-class graphics + sound and let them focus on cost-reducing the TT architecture as the baseline ST successor moving towards the Falcon.

Plus, the TT SHIFTER was fed externally, with at least some of the address and DMA logic shared with the MMU (at least I think it's more like the original ST in that sense), and revisions on the MMU side of things could've allowed cheaper 32-bit or 16-bit-wide DRAM to still support some or all of the TT modes. (Unlike the Amiga, the ST/TT used word-interleaved bitplanes, so burst-mode DRAM access could be used without much complication: with FIFOs filling 64-bit-wide SHIFTER data latches, or however it's configured internally for the TT SHIFTER's 8 16-bit plane words, you'd have a narrower DRAM bus "appearing" 64 bits wide to the TT SHIFTER.) The ST uses a weird memory address arrangement with each word in a different DRAM column (thus refreshed automatically during video DMA, leading to massively excessive refresh counts in the ST), and keeping that scheme makes page mode useless. But even keeping that refresh scheme, you could change rows every 64 or 128 bits and still refresh automatically while allowing page-mode or static-column access to be used effectively (with or without bank interleave as well). OTOH they could drop that scheme entirely and just have constant minimal refresh handled by the MMU, or done via the DMA sound circuit (stack sound words through DRAM rows in a looping buffer 512 words long, covering up to 512 or 1024 DRAM pages, and do dummy read cycles when sound is off/muted).
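Back to the 16-bit RGBI idea for a moment: a hedged sketch of one way the expansion into 24-bit RGB could work. The exact mapping is my own guess; here the 4-bit intensity simply scales each 4-bit gun before the usual 4-to-8-bit expansion, which gives the "16 shades of any color" behavior described above:

/* Hedged sketch (my own mapping): 12-bit RGB + 4-bit intensity expanded
 * to 24-bit RGB. Intensity scales each gun, then 4 bits expand to 8. */
#include <stdint.h>

uint32_t rgbi16_to_rgb24(uint16_t c)
{
    unsigned i = (c >> 12) & 0xF;   /* intensity/luminance field */
    unsigned r = (c >> 8) & 0xF;    /* 4-bit red                 */
    unsigned g = (c >> 4) & 0xF;    /* 4-bit green               */
    unsigned b = c & 0xF;           /* 4-bit blue                */

    /* Scale each gun by (i+1)/16, giving 16 shades of the base color. */
    r = (r * (i + 1)) / 16;
    g = (g * (i + 1)) / 16;
    b = (b * (i + 1)) / 16;

    /* Expand 4 bits to 8 (x17 maps 0..15 onto 0..255). */
    return ((uint32_t)(r * 17) << 16) | ((uint32_t)(g * 17) << 8)
         |  (uint32_t)(b * 17);
}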
Whatever the refresh scheme, if you had the MMU do only minimum refresh during vblank and no refresh during the H-border region, you'd also gain the ability to give extra cycles to the CPU or blitter, and use a 32-bit latch for CPU access if using a 16-bit bus. (Without the latch you'd need the faster bus cycles of a 68030 to "soak up" the available bandwidth in 16-bit bus mode; that evidently worked for the Falcon, but I'd think the cost disparity between an '020 and an '030 in 1989 or 1990 would've been more significant.)

Also, they could/should have just done faster ST versions prior to that, either via a faster CPU+MMU clock with a fixed 32 MHz SHIFTER clock, or a faster CPU + local-bus DRAM controller allowing 0 wait state (0ws) 68000 or 68020 bus timing using slower/cheaper DRAM with less aggressive timing than interleave needs, plus potential page-mode support. In the latter case, any 120 ns DRAM could be used for 0ws 16 MHz 68k/'020/'030 bus cycles with 125 ns access and cycle times; or, short of that, put in 64kB of SRAM or PSRAM as local CPU RAM (128kB for a 32-bit bus). You'd also need to boot the CPU at 8 MHz to accommodate slower ROM, or use fast ROM for the faster CPU. (Performance-wise, fast TOS ROMs would be good ... and the MMU already handles wait states for DRAM access, so I don't think that needs to be modified.) I think the SHIFTER can still get sufficient DMA-cycle access times to work at 32 MHz (ie normal speed) with the MMU at 20, 24, or 32 MHz, but I'm only 100% sure of the 32 MHz setting. If the slower settings worked, there's really no reason Atari couldn't at least have come out with a 10 MHz ST using a mix of 120 and 100 ns DRAM that fit within 20 MHz MMU tolerances. And even if they'd never released the STe, they at the very least had the potential to continue the older ST chipset with a 16 MHz 68000 as a lower-end option to the TT once 70 ns (and certain 80 ns) DRAM became cheap enough. (Though I'd argue a 16 MHz 512kB ST with 4 256kx4-bit 70 ns chips could've met an appealing price point by 1990, if not 1989.)

I'm not sure whether use of the STe's external video sync mode would allow MMU overclocking like that; I haven't seen any such attempts. (Even then, adding a CPU fastRAM-style bus would work around it. Or just add bank interleave to the STe and use fast enough DRAM timing to allow 0ws 16 MHz 68k operation in one bank and the normal slow 500 ns access slots in the other, still with up to 2 MB in either bank, with just 512kB in the video bank probably being the popular option. The Blitter and DMA chips and everything else on the 68k bus side could also be doubled; the blitter would just get 68k-style wait states when accessing the SHIFTER bank.) Albeit instead of developing the STe features, just integrating the custom ST chips into fewer chips would have helped ... except you'd also lose the ability to run the MMU, SHIFTER, and GLUE at different clock rates (required for the 16 MHz mod). At least expanding the SHIFTER's address range to allow for overscan would be nice, or modifying the MMU to allow multiple 32k pages per screen, even if it requires a CPU interrupt to set the next SHIFTER base address during h-blank.
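For what that multi-page hack might look like from the software side, here's a rough sketch. The 0xFFFF8201/0xFFFF8203 video base high/mid registers are real ST hardware, but the per-scanline MMU behavior and everything else here is assumed:

/* Hedged sketch: a per-scanline (Timer B / HBL style) handler on a
 * hypothetical MMU that re-latches the SHIFTER base mid-frame.
 * Only the two register addresses are real ST hardware. */
#include <stdint.h>

#define VID_BASE_HI  (*(volatile uint8_t *)0xFFFF8201)
#define VID_BASE_MID (*(volatile uint8_t *)0xFFFF8203)

static uint32_t page_base[8];  /* one 32kB page per screen strip (assumed) */
static int next_page;

void hblank_handler(void)      /* would be vectored from Timer B or HBL */
{
    uint32_t a = page_base[next_page];
    next_page = (next_page + 1) & 7;
    VID_BASE_HI  = (uint8_t)(a >> 16);
    VID_BASE_MID = (uint8_t)(a >> 8);
}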
More bitplanes would be nicer, but they'd definitely use more chip space, though I don't think so much so if adding packed-pixel modes, or at least it'd be much cheaper gate-wise than expanding the SHIFTER bitplane buffers beyond the 2 sets of 4 words they use. The simplest option is probably 4 MHz 8-bit chunky pixels bypassing the palette and using 8 of the 9 digital RGB outputs. The 1-bit monochrome mode already works with linear chunky pixel addressing, so you'd just need the 1-bit shifted output accumulated into 8-bit words and latched every eight 32 MHz cycles. (In fact, it might be possible to mod/hack this externally by hooking the monochrome pixel output line to an 8-bit shift register, or a chained pair of 4-bit shift registers, feeding the 8-bit latched output to 8 bits of the RGB resistor array, then putting the SHIFTER in monochrome mode while setting the GLUE to NTSC or PAL color mode. You'd get a 4 MHz pixel clock with 256-color 8-bit RGB at 160x200 ... or 320x200 with a 16 MHz overclock and 64 MHz SHIFTER ... and at least some ST SHIFTERs can already run fine at 64 MHz.) You do need the software overscan hack to extend the video fetch to 640x200x4bpp-equivalent bandwidth, though (not much CPU overhead needed for that), otherwise video ends after 100 lines with garbage for the remaining 100 lines. https://blog.troed.se/projects/atari-st-new-video-modes/

Actually supporting a 256-entry CLUT would add a lot more chip space and more changes to internal SHIFTER logic, since the pixels need to be in packed format before passing through the CLUT, whereas monochrome mode achieves linear pixel order by bypassing the CLUT entirely. They could've started with cheap direct 8-bit RGB and later switched to feeding the 8-bit output into a VGA RAMDAC for an 18-bit RGB CLUT. The RAMDAC would be more relevant for a 16 MHz system-clock version recycling the old SHIFTER but allowing a linear 320x200 256-color chunky mode identical to MCGA/VGA Mode 13h (a dumb framebuffer, no hardware scrolling ... but better than VGA or MCGA in that it could at least multi-buffer and page-flip). The TT SHIFTER has the bandwidth for 320x200x16bpp highcolor at TV resolution, like the Falcon. (Had they only included an ST-compatible CLUT inside the TT SHIFTER and hooked it up to an external RAMDAC, 8bpp could've been supported initially with an optional 16-bit highcolor RAMDAC ... except that won't work at VGA resolutions unless you have a line buffer or allow bus saturation during active display.)
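To show why that external shift-register hack would give you a linear chunky framebuffer essentially for free, here's a little simulation of the idea (my own illustration, assuming the monochrome SHIFTER shifts each byte out MSB-first):

/* Sketch: the monochrome SHIFTER serializes each framebuffer byte
 * MSB-first, so clocking 8 consecutive 1-bit outputs into an external
 * 8-bit shift register just reconstitutes the original byte, which the
 * latch then presents as one 256-color RGB pixel. */
#include <stdint.h>
#include <assert.h>

uint8_t relatch_eight_dots(uint8_t fb_byte)
{
    uint8_t reg = 0;
    for (int dot = 7; dot >= 0; dot--) {        /* MSB shifted out first  */
        uint8_t mono_out = (fb_byte >> dot) & 1;
        reg = (uint8_t)((reg << 1) | mono_out); /* external shift register */
    }
    return reg;                                 /* latched as one pixel   */
}

int main(void)
{
    assert(relatch_eight_dots(0xA5) == 0xA5);   /* framebuffer byte == pixel */
    return 0;
}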
  6. OK, so I read through the document more completely. The feature set is nothing like the 7800 or Panther. It also doesn't look to need double line buffers like MARIA or the Panther (or Jaguar) used, just a single 320x5-bit line buffer that background and sprite data get loaded into. It's much more orthodox for late-80s and early-90s game consoles, except it includes bitmap framebuffer modes as well, like GTIA does, but at higher resolution and color depth, more like the ST. It also has a full-screen background rotation mode like SNES Mode 7 (but in 16 rather than 256 colors).

- Bitmap framebuffer or tilemap backgrounds; 12-bit RGB colorspace like the STe or Amiga
- 64 hardware sprites per frame (without multiplexing); sprites are 16 colors (presumably 15 + transparent)
- 16 sprites per scanline; 2-bit size select for 8 or 16 pixels wide and 8 or 16 pixels tall; all sprites can be at their max 16-pixel width on one line, so 256 sprite pixels per line
- 1-bit palette select from 2 16-color palettes (32 colors on screen total, shared with the background)
- 2-bit sprite scaling for both H and V (1x, 2x, and 4x settings for width and height), somewhat like the VCS does (could help with scaling/zooming animation, with drawn animation frames filling the in-between steps, or just BIG chunky coarse-pixel sprites)
- 8-bit V and 9-bit H position (512x256 screen/off-screen space)
- Tilemap background (character mode) using a set of 256 8x8 character cells per frame; 40x24 character map (320x192 pixel screen)
- 2-bit-per-pixel (4-color) 8x8 character cells; 8 4-color palettes selectable per character; a 1-bit transparency flag (per tile) allows 3 colors + transparent instead of 4
- 2 bits for horizontal and vertical flipping/mirroring of tiles
- per-scanline scrolling effects
- per-column scrolling effects (usually used for screen-tilting, as seen in Mega Drive games)

I worked out the math, and it seems like sprites are loaded during H-blank, with the active screen occupying a similar area to the ST or C64, ie larger horizontal border space than the A8 or Amiga, using an 8 MHz pixel clock like the 320-wide modes on the ST and C64 vs 7.16 MHz on the A8 and Amiga. With 512 8 MHz cycles per scanline, and 320 for the screen, that leaves 192 for the border area, and 192 bytes is exactly what's needed for 16 sprites using 4-byte sprite list entries and up to 8 bytes (16 pixels) of width each. This also means peak bus bandwidth is 8 MB/s, possibly using bank interleaving or page mode (there are 2 8-bit DRAM banks, so bank interleave would work). The CPU only accesses DRAM at 4 MHz, though, so the chipset can soak up more bandwidth than the CPU.

Bitmap framebuffer screen modes:
- 320x192x4-bit (16 colors), stealing 50% of CPU cycles
- 640x192x2-bit (4 colors), stealing 50% of CPU cycles
- a special 320x192 16-color mode with full-screen rotation (possibly scaling/zooming too, given the hardware needed for rotation normally does scaling as part of the same math); this mode uses all bus time (steals all CPU cycles) while the screen is displayed (the CPU could work in vblank, or on screen lines where the mode isn't used)

The CPU runs at 8 MHz internally and 4 MHz when accessing DRAM. Internal RAM is listed as 512 bits, and either that's a typo or it's 64 bytes of on-chip RAM; it could very well be the latter. The 8 MHz speed switch happens automatically when accessing that on-chip memory area
(and probably all the chipset register space as well: graphics registers, I/O ports, etc). That'd give a tiny scratchpad for bits of code and data, most likely mapped into zero page (or potentially split, 32 bytes in page 0 and 32 for stack use or something like that). As zero-page pseudo-registers, that could make decent use of the 8 MHz speed and let compiled C code work much better than usual on 650x processors, especially with sloppy C compilers that use zero page as 68k- or x86-style register space (ie lots of loading into zero page rather than executing outside of it; with that scratchpad area running 2x as fast, you'd at the very least lose less performance to that sort of code, and in some cases it might even run faster than not copying into fast RAM).

The sound system also doesn't appear to be the Ricoh sound chip I was thinking of; it's just a simpler set of 8 sound channels with 4 fixed sample rates (so no note-scaling via sample rate like the Amiga does; you need samples pre-scaled in RAM). You set the sample length and loop them, so it's still better than software mixing on the STe. It's closer to the DMA sound on the Falcon, but 8-bit and 40 kHz max.

The 16-color 320x192 mode would probably be useful for software sprites and/or more complex backgrounds than the tiled mode, especially if packed pixels are used and not bitplanes. The A8 hardware uses packed pixels, so that would make sense, and the rotation mode would need packed pixels to work efficiently. (Then again, the SNES uses bitplanes for sprites and tilemap graphics except for Mode 7, which uses 8-bit chunky pixels.)

There's no mention of a hires monochrome mode like the ST has, even though it'd use the same bitrate as the other 2 bitmap modes (4 MB/s for those, same as all Atari ST screen modes), and 640x400 71.5 Hz monochrome would be a really nice feature in a low-cost 8-bit computer. Similarly, they could've supported VGA analog RGB modes in 2 colors, or 320x400 in 4 colors (or 320x200 line-doubled), or even 320x200 16 colors line-doubled if they used the full 8 MB/s.

Also no 32-color bitmap mode. That would have to use bitplanes, but it would fit into the 320x5-bit line buffer, just with a bit more CPU cycle stealing. So you'd have a 32-color Amiga-like bitmap mode for graphics/art stuff and some games. Better would be a 6.4 MHz pixel clock (8 MHz x4/5) with the same 50% CPU cycle stealing, doing 256x192 in 32 colors with sprite DMA, or 320 pixels overscanned (about 300 pixels visible on most TVs). A 6.4 MHz pixel clock is also close to square pixels on NTSC 60 Hz TVs and monitors, so even better for art.

Also no 256-color mode, which it lacks the palette RAM to do. But since it must have a ROM look-up table for emulating the 256-color A8 palette, they could've just used that for a 160x192 256-color mode (also good for 3D or pseudo-3D with chunky pixels, or for a 256-color version of the screen rotation mode). This would also be easier to add to a chunky-pixel system than to any bitplane design, where you'd need extra on-chip buffers and bit-shifting logic to convert bitplane data to screen pixels. Technically, the 320x5-bit line buffer could be repartitioned as 200x8 bits for a 200x192 256-color screen (you'd want a 5 MHz pixel clock to allow sprite DMA, or just use 6.4 MHz for square pixels and a larger border). You couldn't use the palette data for sprites anymore, though, so they'd be just 8-pixel-wide 256-color sprites, and you couldn't place sprites outside that 200-pixel area.
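A quick arithmetic check of the sprite fetch budget above, plus the line-buffer repartitioning options discussed here and in the next paragraph:

/* Checking the per-scanline sprite budget and the 1600-bit line buffer
 * splits, straight from the figures in the text. */
#include <stdio.h>

int main(void)
{
    int line_cycles = 512, active = 320;
    int border = line_cycles - active;      /* 192 cycles for fetches     */
    int sprite_bytes = 16 * (4 + 8);        /* 16 sprites: 4-byte list
                                               entry + up to 8 data bytes */
    printf("border cycles %d, sprite fetch bytes %d\n", border, sprite_bytes);

    int buffer_bits = 320 * 5;              /* 1600-bit line buffer       */
    int depths[] = {5, 8, 10, 12};
    for (int i = 0; i < 4; i++)             /* 320, 200, 160, 133 pixels  */
        printf("%2d bpp -> %3d pixels\n", depths[i], buffer_bits / depths[i]);
    return 0;
}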
OTOH, 160 pixels wide could use a 10-bit line buffer split, allowing a 256-color framebuffer plus normal 16-color sprites for 256+32 = 288 colors on screen at the standard C64-like 160x192 pixel size and screen area. That also means those 16 sprites can more than fill the screen width, which would give some nice options for sprite-based parallax in the foreground. They might have thought adding a higher-color low-res mode was silly, but realistically it would've had a lot of uses, and on top of that it would be great for 3D games. It's certainly no sillier and no less useful than the low-res 16-color (or 9-color) GTIA modes. Mirroring actual GTIA modes would've meant a 16-bit (effectively 12-bit RGB) pixel mode at 80x192 with the full 4096 colors. Or, better than that, 160x192 with 4096 colors using maximum CPU cycle stealing ... except you only have 1600 bits of line buffer space, enough for 133x12 bits max. In terms of chip space and expense, special low-res high-color modes would've been much cheaper than the rotation feature, though having SNES-style full-screen rotation or texture-mapped ground planes would certainly have been impressive at the time, even more than it was in 1991. Technically you could also do some of that in software, especially in a low-res 8-bit chunky mode (easier if you gave the CPU a hardware multiplier like the Lynx's ... or the SNES's, but much faster than that).

Also, I'm not convinced the DRAM shortage alone killed this project, since it would've been super easy to just use 2 32kx8-bit SRAMs to populate the 2 RAM banks and not have to worry about DRAM interfacing at all. Keeping PSRAM open as an option would've extended this further while using the same pinout and packaging as SRAM, so the same motherboards could take either. (You just need some refresh logic to let PSRAM run at its fastest, though scanning out screen data could auto-refresh it, and DMA sound cycling could probably handle it during vblank.) 64kB of SRAM would still be enough to keep it backwards compatible with the XEGS and 65XE, though PSRAM would've kept the price point closer to DRAM's (or possibly below DRAM's during the height of the shortage). DRAM is inherently cheaper to make, but it's also the only type of RAM that plugs into all those PCs and clones of 1988/89, where SRAM and PSRAM do not compete for supply. (Aside from 8-bit or 16-bit ISA cards populated with SRAM, or proprietary RAM upgrade slots on some boards, but almost all of those were populated with DRAM too ... though I think the IBM Convertible used SRAM upgrade cards.) And hell, using SRAM as main RAM in home computers was an old Tramiel thing whenever it was the cheaper option for one reason or another (usually surplus stock with nothing better to use it for); that's what the PET and VIC-20 used. But since this was a computer, not just a game console, and it would've used the same DRAM chips as the 65XE, 130XE, and Lynx at the time, with 128kB minimum (2 64kB banks), this shouldn't have been a deal breaker.
  7. I managed to miss the reveal of this hardware, but it's super interesting. It appears to be the early stages of what became the Atari Panther, and given the 5-bit (32 color) line buffer and palette limitations, the "GAME SHIFTER" was probably designed for the Super XE first and re-used for the Panther. Interestingly, the document says 320x8 with the 8 crossed out and 5 written in, so they might have been planning on a full 256-color line buffer before cutting it down, or that was a typo.

It also mentions using an 8 MHz 65816 (when working in fast SRAM), and 4 MHz in slow memory. Both of those are blazing fast; at 8 MHz it's faster than the 16 MHz 68000 in the Panther, at least for assembly language programming and for most uses in games (ie not making heavy use of 32-bit arrays or multiply/divide instructions ... and even then, at 8 MHz, software mul/div routines would probably be faster too with some small look-up table optimization). This is really, really fast in general, not just for a 650x. The PC Engine's 7.16 MHz 65C02 derivative is already faster at a number of things than the Mega Drive's 68000, and the 65816 is significantly faster for 16-bit operations, which would remove some of the remaining disadvantages. (The 68000's ability to use much slower RAM and ROM without wait states would be a big factor, and I'm not sure how Atari would work around that, unless they used slower ROM and copied code into RAM for the CPU to work in.) Given they were partnering with Ricoh for production, the 8-channel PCM chip would almost certainly be the same one used in the FM Towns, Mega CD, and several arcade boards (including Sega's System 32). It's vastly more cost effective than the Sony DSP-based SPC module used in the SNES, yet for many purposes it wouldn't sound much worse, and in many cases it would sound similar or better, especially in later games that could afford higher-quality samples in ROM. (It uses uncompressed 8-bit linear PCM; you could keep ADPCM in ROM and decompress it into sound RAM, but the best case is good-quality 8-bit PCM straight from ROM for games that can afford it. That would be cleaner and higher-quality sounding than the compressed, interpolated, and heavily filtered SPC ADPCM samples ... honestly, Nintendo should've used it, saved the money there, and spent it on faster DRAM and a faster CPU clock rate.)

Note this is also a full 3x the speed of the SNES CPU. The SNES CPU, when working in RAM or normal (slow) ROM, is limited to 2.685 MHz (actually NTSC chroma clock x3/4 = 2.68466 MHz), and assuming Atari used the same 32.215905 MHz system clock as the STe and Panther, then "8 MHz" would really be 8.053976 MHz, or NTSC chroma x9/4. So it's literally exactly 3x the clock rate of the SNES. Note that the SNES's CPU can run at 3.579545 MHz (1x NTSC chroma) while accessing PPU registers and in fast ROM, but for some reason they opted for 128kB of DRAM too slow to run even at that speed, or the DRAM control logic in the system ASIC just couldn't manage to be fast enough. That also makes it impossible to copy data and code from ROM into RAM to make it run faster. And note that NEC had already used DRAM instead of SRAM for the PC Engine CPU at 7.15909 MHz in the CD-ROM expansion unit (64kB of 70 ns DRAM, cycled at 7.15909 MHz with access times fast enough for 650x timing), and that was in 1988, not 1990/91 like the SNES. NEC later switched to SRAM for the Super CD upgrade, but that was likely only cheaper because of vertical integration and their own SRAM production
(there are few other situations where 256kB of 100 ns SRAM would be cheaper than 256kB of 70 ns DRAM). Even if the DRAM controller was the problem, they could've used a smaller amount of SRAM and been better off, or more likely PSRAM, as that's cheaper and was widely available by 1989 (the SNES uses it for sound RAM in most cases, and the Mega Drive uses it for CPU RAM). PSRAM = pseudo-static RAM: DRAM cells plus embedded DRAM control and refresh logic inside an SRAM-compatible package, mostly a direct replacement at around 1/4 to 1/2 the cost of SRAM. (In terms of silicon area it's closer to 1/4, but you still have SRAM's higher pin count, so it won't be as cheap as the smaller, multiplexed-address true DRAM chips. The DRAM shortage of 1988-1990 made PSRAM and SRAM more attractive for a while on top of that, since SRAM and PSRAM production didn't directly compete with commodity DRAM. There was also other stuff going on at the time, like the DRAM-specific price floor the US set on Japanese imports, but that only applied to bare chips, not assembled hardware using them.)

650x-type CPUs need memory cycled at their clock rate, with access times somewhere between 1/2 and 3/4 of a cycle. Datasheets usually show times much shorter than 3/4 of a cycle for all speed grades over 2 MHz, but actual hardware using these CPUs seems to fare better than that, like NEC using 100 ns SRAM without issue for a 7.16 MHz 65C02 (139.7 ns cycle time, so 100 ns is 0.716 cycles). Most 65C02 and 65816 datasheets I've seen show 70 ns access time requirements for 8 MHz, which would imply 80 ns for 7.16 MHz. Atari and Ricoh likely knew the more realistic access time limits and were working around them, but even so it's extremely tight timing compared to x86 or 68k chips, or the Z80 for that matter. In the broadest sense, 650x-type CPUs need memory running 4x as fast as a 68000 at the same clock speed: with the Atari ST's memory timing, the best you could get would be a 2 MHz 6502. A 68000 needs memory access within 2 clock cycles vs 1/2 of 1 cycle on the 650x (worst case). The 68000 uses 4-clock-tick machine and bus cycles, so memory needs to respond within 2 ticks but can take 4 CPU clocks between accesses. (Or, as in the ST and Amiga, memory can cycle every 2 CPU clocks with DMA access periods interleaved into the second half of each 4-clock period. Some 650x machines do the same thing; in fact, the BBC Micro uses almost identical DRAM cycle and DMA cycle timing to the Atari ST, but with a 2 MHz 6502 and an 8-bit bus, so half the bandwidth of the 16-bit bus.) The 8088 and 8086 (and the 188/186 and V20/30) also use 4-clock-tick memory cycles like the 68000, but don't need access until the end of the 3rd clock tick, 3/4 of a full bus cycle, more like the best-case scenario for a 650x, or the datasheet-standard scenario for 1 or 2 MHz parts. (I believe the Atari 8-bit DRAM timing is based around this, with data latched 3/4 into a 1.79 MHz cycle and DRAM cycling at 1.79 MHz; the Apple II and C64 cycle DRAM at 2 MHz in order to interleave video DMA with CPU cycles at 1 MHz, ie 1/2 the speed of the BBC Micro or Atari ST.) The 65816, unlike the 6502/C02, has a multiplexed address and data bus for the upper 8 bits of its 24-bit address space, so it also requires an external address latch, which can potentially add memory delay and demand faster RAM or ROM than a non-multiplexed equivalent.
A fast enough latch should allow effectively similar timing to a non-multiplexed bus, though, especially with DRAM or PSRAM, where the delay can be hidden during precharge. Also note that for CPUs like this, where access time must be faster than cycle time, SRAM and ROM aren't ideal, since SRAM and ROM only cycle as fast as their access time. That's great if you build a fancier memory system interleaving DMA cycles with CPU cycles using RAM exactly 2x as fast as the CPU, but it means a simpler non-interleaved system has to use SRAM faster than the minimum cycle time would suggest. (This is not the case for 286 and later x86 processors, or the 68030 and '040; those all do 2-clock-tick access and cycle times.) In any case, that makes DRAM even more appealing than it otherwise would be, since DRAM can be accessed significantly faster than its minimum cycle time, and the same is true of PSRAM (since it's DRAM-based). For example, 150 ns PSRAM can be accessed in 150 ns but cycles at around 235-250 ns (depending on manufacturer); likewise 120 ns access with a 210 ns cycle, 100 ns with 160 ns, 85 ns with 135 ns, 80 ns with 130 ns. Applications that need PSRAM to behave like SRAM have to pick chips by cycle time rather than access time, but in cases where you actually need a fast access time, that makes DRAM and PSRAM even more efficient compared to SRAM.

Now, note how this Super XEGS intended to use DRAM, just like the 8-bit computers, in dual banks (I don't see it in the document, but I suspect 2 banks of 64kB, 128kB total). Operating at 4 MHz, you'd need an access time of ~150 ns (65816 datasheets say 130 ns, but that's probably excessive in practice; even 150 ns is probably conservative, where best case might be around 180 ns). In any case this means 150 ns or faster PSRAM and 120 ns or faster DRAM, or 100 ns if they didn't use an especially fast DRAM controller and address latch. Note: late-model XE systems and all the XEGS units I've seen motherboard pictures of already tend to use 120 or 100 ns DRAM, so this should be a non-issue cost-wise. The docs say 250 ns DRAM cycle time, so they're probably using 120 ns DRAM like the Lynx and a lot of STs used. (Early STs used almost entirely 150 ns DRAM and pushed it slightly out of spec, cycling 260 ns rated RAM at 250 ns, but it worked without issue in practice. The ST MMU doesn't latch data early, as you'd need for a 4 MHz 650x or a 16 MHz 68000; it only needs to latch data at 250 ns for the 8 MHz 68000, and SHIFTER DMA cycles don't need early access either. From the investigation I've seen, the ST MMU simply does 250 ns cycle and access times.) 120 ns DRAM can be cycled faster than 250 ns, but that's not important here, since 250 ns is the CPU cycle time and all you need is a fast enough access time. (If it used a 16 MHz DRAM controller with a 2-phase clock like the ST's, for effectively 32 MHz pulses of 31.25 ns, you could do 125 ns RAS + 31.25 ns of CPU-to-MMU-to-DRAM address delay = 156.25 ns, which is probably good enough for 4 MHz.) They probably just use DRAM timing similar to the Atari Lynx's, probably without page-mode cycles, except maybe for the object processor.

However, due to the DRAM shortage in 1988, at some point Jack Tramiel ordered that any game console must be based around SRAM, or at least not use DRAM. (This should have made the then-new PSRAM a consideration, but the Panther didn't use that.)
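To pin down the clock arithmetic from the last couple of paragraphs (everything falls out of the NTSC chroma clock):

/* Checking the SNES vs Super XE clock ratios and the 650x access-time
 * rule of thumb quoted above. */
#include <stdio.h>

int main(void)
{
    double chroma = 3.579545e6;                 /* NTSC chroma clock       */
    double snes_slow = chroma * 3.0 / 4.0;      /* SNES slow-bus CPU rate  */
    double sxe_fast  = chroma * 9.0 / 4.0;      /* "8 MHz" Super XE rate   */

    printf("SNES slow : %.5f MHz\n", snes_slow / 1e6);        /* 2.68466 */
    printf("SXE fast  : %.5f MHz\n", sxe_fast / 1e6);         /* 8.05398 */
    printf("32.215905 / 4 = %.5f MHz\n", 32.215905 / 4.0);    /* same    */
    printf("ratio     : %.2fx\n", sxe_fast / snes_slow);      /* 3.00    */

    /* Access-time rule of thumb: data valid ~1/2 to 3/4 of a cycle in.
       NEC's 7.16 MHz 65C02 with 100 ns SRAM: */
    double cycle_ns = 1e9 / 7.15909e6;                        /* 139.7   */
    printf("100 ns = %.3f of a 7.16 MHz cycle\n", 100.0 / cycle_ns);
    return 0;
}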
That edict likely led to the Super XEGS being cancelled and the SRAM-based Panther becoming the follow-on project; the Lynx being DRAM-based was probably bad enough at the time. They also obviously abandoned backwards compatibility to simplify the design and keep costs down. They used 120 ns SRAM for the Panther on a 32-bit wide bus, but just 32kB, and in 1989 that's not too bad. By 1991, when the Panther was cancelled, it's really skimpy from a cost and price-to-performance standpoint, with 32kx8-bit 120 ns SRAMs by then at the low end of the market and a better value than the 8kx8 chips in the Panther (you need 4 8-bit-wide chips to make 32kB, and 16-bit-wide SRAMs weren't available at the time, so 64kB would've needed 8 8kx8-bit chips: even less cost effective and using 2x the board space). Then again, they should have just switched from DRAM to PSRAM instead. They should've known it existed and been looking into it as an SRAM alternative from 1986/87, when promotional material and publications were coming out, and more so as a DRAM alternative in 1988 with the new trade restrictions and the DRAM shortage caused by poorer-than-expected 1Mbit chip yields, among other things. (PSRAM is also called Virtually Static RAM in early 1986/87 articles and still in some later datasheets, and XRAM by NEC.)

Given Atari ended up using VLSI for Lynx chipset production and not Ricoh (both had licenses for the 6502, 65C02, and 65C816 cores), there must have been a falling-out between Atari and Ricoh by 1989, when Lynx production started, and/or VLSI just offered better deals. Atari also made a deal with Ensoniq at some point in 1988, which might have complicated the Ricoh sound chip situation (and might also be why Atari never used a Yamaha FM synth chip in the ST or console designs). Or that might be wrong too, and Atari might have kept all their options open and made sure any partnerships or deals weren't mutually exclusive, especially before any product actually went to market: ie keep multiple options open until something actually goes into mass production ... negotiations with multiple chip vendors while also looking for an in-house chip vendor, same for sound chip manufacturers, etc. (Though I honestly doubt anything would've been as cheap yet still "good enough" as Yamaha's OPLL, the YM2413 ... a cut-down version of what Adlib and Sound Blaster cards used, the same one as in the Japanese Master System, and it probably would've easily beaten most PC Adlib/SB music if used by the more talented UK/European chip composers ... or Japanese ones, though Atari would've had more of an uphill battle getting Japanese developer support, I think. Then again, after the Nintendo antitrust lawsuit they might have had more room to sign on Japanese 3rd parties ... but even so, without a platform actually having a Japanese market presence. Then again, maybe they could've partnered with a Japanese company to market the new console.)

OTOH, it's a shame they didn't just re-use MARIA as-is with a fast enough DRAM controller to support it (more likely using DRAM in place of ROM for graphics data at MARIA's 3-clock-cycle = 419 ns requirement, and SRAM for display lists and high-speed CPU access), or just a single 32kB SRAM chip like a few 7800 games already carried on cart. (Summer Games and Winter Games both do, using only 16kB of it, since nobody made 16kB chips and two 8kB chips wouldn't fit without a larger/custom cartridge shell, which was apparently more expensive than a single 32kB SRAM in 1987/1988.)
You'd want more than 32kB for a MARIA-based computer, though, so DRAM is needed there, plus probably a single 8kB SRAM chip. Also, I believe MARIA only needs SRAM cycled at ~279 ns (2 MARIA clock ticks per access), given lists take 2 clock ticks per byte to process, though it may still need access time within 1 clock tick. This still means you could add an external memory controller allowing 50/50 sharing of the SRAM bus with MARIA without any wait states (the 7800 has to halt the CPU during all MARIA bus activity, whether in RAM or ROM, so this would be much faster even with the same 1.79 MHz 6502). Additionally, that 50/50 SRAM sharing plays to the strengths (or weaknesses) of 650x access timing: you'd have 279 ns synchronous memory slots with 120 ns access times, which means 3.58 MHz could easily be obtained with a 65816 (ie faster than the SNES or Apple IIGS). Outside of MARIA bus time, the CPU could be clocked faster: 4.09 MHz conservatively (7.16 MHz MARIA clock x4/7), but likely 4.77 MHz would be fine (7.16 x2/3), and possibly even 5.73 MHz (7.16 x4/5). Additionally, MARIA doesn't access SRAM while doing graphics reads (ROM in the 7800; ROM or DRAM here), so the CPU could potentially access SRAM in high-speed mode during MARIA ROM/DRAM reads.

You could also interleave DRAM or ROM access, but for that the memory would need to cycle at 209.5 ns (MARIA graphics data reads take 3 clock ticks, 419 ns), which means 200 ns ROM rather than the cheaper 250 ns ROM the Mega Drive used at the time (you could reserve that as an optional FastROM setting). Interleaving also means dropping the CPU to 2.3866 MHz, but that's vastly better than being halted, and still faster than the 7800's or XL/XE's CPU. You'd also need 100 ns DRAM to achieve those cycle times without running out of spec (with the exception of some CMOS 120 ns DRAM that cycles faster than rated, so Atari could've used a mix of 120 and 100 ns DRAM that fit the required timing). Or, if you don't want to deal with interleaving, just use 120 ns DRAM with 139.7 ns access and 279.4 ns cycle times (or 120 ns PSRAM), with the potential to run faster cycles when MARIA is idle/off the bus, possibly 209 ns for a 4.77 MHz CPU speed. (Using PSRAM for this would be easier, not depending on a fast and efficient DRAM controller.)

Though if you don't need 650x backwards compatibility, a 3.58 MHz Hitachi 6309 would be interesting, and a Motorola 68008 at 9.545 MHz would be worth considering (209.5 ns access, 419 ns cycle time, so it could interleave fully with MARIA in DRAM or ROM cycled at 209.5 ns), plus it'd be just as fast as a 68000 for 8-bit operations (lots of MARIA list operations modify byte-wise data) and could work from cheaper 8-bit-wide ROM, at half speed on most 16-bit and 32-bit operations (multiply and divide would still be faster than an 8 MHz 68000's, due to how slow those instructions are). That, or if they wanted more software compatibility overlap with the ST architecture and a cut-down version of TOS ... sort of an EST (8/16/32, except "EST" would be terribly confusing next to the STe name). Granted, the 6309 is also technically a hybrid 8/16/32-bit architecture. Note: to even make use of interleaved SRAM speed in the 279 ns bus slots, you'd actually need a 14.318 MHz 68008 (139.7 ns access and 279.4 ns cycle), and then you could run up to 16.7 MHz when not interleaved (on 120 ns SRAM).
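All the clock options above fall out of the 7.15909 MHz MARIA clock; a quick check:

/* Deriving the MARIA timing and CPU clock options quoted above. */
#include <stdio.h>

int main(void)
{
    double maria_mhz = 7.15909;
    double tick_ns   = 1e3 / maria_mhz;                /* 139.7 ns      */

    printf("SRAM list access, 2 ticks: %.1f ns\n", 2 * tick_ns); /* 279 */
    printf("gfx data read,    3 ticks: %.1f ns\n", 3 * tick_ns); /* 419 */

    /* Interleaving CPU slots into the 3-tick graphics reads halves the
       cycle to 209.5 ns, ie a 2.3866 MHz 650x: */
    printf("interleave slot: %.1f ns -> %.4f MHz CPU\n",
           1.5 * tick_ns, maria_mhz / 3.0);

    /* Faster options when MARIA is off the bus: */
    printf("x4/7 = %.2f  x2/3 = %.2f  x4/5 = %.2f MHz\n",
           maria_mhz * 4 / 7, maria_mhz * 2 / 3, maria_mhz * 4 / 5);
    return 0;
}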
Now, such a system couldn't be cost effective if it had to be both 7800 and XE backwards compatible, and it would be cheaper still (and/or better performing) as neither of those, just re-using existing Atari custom chips (and off-the-shelf MOS-compatible I/O chips) where advantageous. The design philosophy here is quite divisive. Albeit, given they'd already done a 2600-on-a-chip, and given the simpler hardware of the 2600, doing the same thing on a new chip that replaces the 6502 with a 65816 and adds as much I/O and sound hardware as you could manage would be the cheaper option, and cheap enough not to discard all backwards compatibility entirely.

Now THAT said, it's also worth noting they could have done what Commodore did from the PET to the VIC to the C64 (and C16/Plus4) and still kept software compatibility at a high level for all programs using BASIC or running through OS calls: a MARIA-based computer using compatible Atari DOS and Atari BASIC that could run tape and disk programs written in BASIC or running under Atari DOS, still using the Atari SIO port for standard 8-bit peripherals, or some mix of SIO and Atari ST/PC-compatible interfaces. (More likely SIO + the Enhanced Cartridge Interface, with the latter allowing further expansion via modules if Atari bothered ... not actually using Atari 8-bit ROM carts, but the same expansion port and pinout, or just a PBI-style edge connector.) And even then they'd have had fewer confusing, overlapping products than Commodore.

Aside from that, they could have designed a new system in parallel with a 7800 expansion unit providing similar features, but that's messier and harder to do efficiently. Still, it probably could've worked OK with a bunch of redundancy on the expansion module (65816 + added RAM + added I/O hardware), forcing software to avoid the portions of the 7800 made redundant, while still being at least slightly cheaper to produce than an all-new system. (That might include duplicating TIA+RIOT onboard the expansion unit in order to access all of those I/O registers at full speed, rather than slowing down the way the 7800 has to.) And even then you have the downside of the 7800's muddy video output (due to the way TIA and MARIA video are coupled) and its RF-only output. At some point it's just cheaper to skip the expansion module and release an all-new system, but that really depends on the resources needed to support 2 separate hardware releases vs the market appeal of an upgrade vs a new system. And Atari had a very solid install base of 7800 owners, selling many more of those than 8-bit computers or STs at the time (at least in North America), with over 3 million sold in North America by the end of 1988.

Also, it's a shame they didn't use the MARIA-style display list and object list architecture to soup up or replace the ST SHIFTER as part of the STe, potentially using the same 5-bit (32 color) line buffer as the Super XEGS and Panther. For that matter, they could've used the same chip for an 8-bit bus console and the 16-bit ST: either give the graphics chip an 8-bit data bus and have the memory controller feed it 2x as fast from a slower 16-bit bus, or give the graphics chip a 16-bit bus and have an 8-bit memory controller in the console feed it 2 bytes in series through a FIFO. (Just make all lists and data sizes multiples of 16 bits, 2 bytes, and you're good ...
hell, even with MARIA as-is, all you'd need to do is extend DLLs from 3 to 4 bytes and the 5-byte DL headers to 6 bytes, keep the 4-byte DL entries unchanged, and have all graphics data pointers aligned to multiples of 2 bytes, and you're good to go.) It would either need to natively support both ST-style bitplanes and MARIA-style packed pixels (including MARIA's weird pixel formats in some 2bpp and 4bpp modes), or give up backwards compatibility with one or the other. Given both formats (aside from MARIA's funky nonlinear modes) are useful and advantageous in different situations, in video games and in computers, they should have just supported both. That way 5 bitplanes could be used for 32-color objects (or 31 colors + transparent) with maximum RAM/ROM efficiency (at least when uncompressed), and 3-bit objects could also be supported, rather than just the 1, 2, 4, or 8 bits per pixel a packed format would limit you to.

Focusing on hardware that could be equally valuable across the 8-bit computer, ST, and game console lines seems like a big win for optimal use of R&D. In the best case the same hardware gets used in multiple platforms, and failing that, at least you don't spread yourself as thin with parallel projects (STe, TT, and Super XE all in 1988; Panther and Falcon in 1990; Falcon and Jaguar in 1991/92; all of them using different graphics chips and memory controllers ... which comprise most of the custom chips in the ST family). Plus, a 16-bit souped-up MARIA working at ST or better resolutions would be a genuine Amiga beater. (Even with the 32 color limit, it would beat OCS Amiga graphics for games, art/graphics, and productivity applications, especially if the line buffers were exploited to line-double the low-res modes to VGA monitor rates, so you wouldn't need a multi-sync monitor to run the hi and low res modes, just a standard VGA monitor.)

There's a variety of other ways MARIA could've been upgraded less dramatically but to very serious effect, like making parts of it faster or speeding up some data fetches (2 clock ticks for character map and graphics reads instead of 3), or expanding CRAM to allow 4 15-color palettes rather than 2 12-color ones. (Line RAM is already 320x3 bits in hires, so should be 160x6 bits in low res, but there's only enough CRAM for 25 entries total, for 61 colors in 4bpp mode. 8 logical palettes are supported, but only some modes can use all 8 palette selections due to line buffer and CRAM limits: in 320-width mode you're limited to 8 colors, ie 8 1-color palettes or 2 3-color palettes.) Using 8 15-color palettes would be possible, but you'd need 7-bit-wide CRAM instead of the existing 6. A 240-pixel-wide 16-color mode would be possible with the existing line buffer and palette space, but it would need new modes using a 5.37 MHz pixel clock (like the SMS, NES, ColecoVision, etc) instead of 7.16/3.58 MHz; this might also complicate color generation, as the chroma/luma system may depend on integer multiples or fractions of the NTSC chroma clock. Or, mostly relevant to computer use, add 640-pixel-wide 1bpp (2 color or B/W) modes. Given NTSC artifacting, you'd want to be able to disable colorburst for clean hi-res graphics/text, like CGA allows. (MARIA uses separate chroma and luma outputs already, so you'd just need to terminate the chroma output, like plugging only the luma line from an Atari 8-bit or C64 into a composite monitor or TV.) Then you can do everything CGA graphics modes can do, and a lot more ...
but you can't do 16-color 40 or 80 column text modes like CGA can (or 16-shade grayscale text in luma-only composite mode). Then again, the Atari ST also can't do 16-color 80 column text like CGA or Tandy video can. You can do 40-column 4-color text on MARIA, or even 16-color text if you use 4x8 character cells in 4bpp mode. Technically you can do 80-column 4-color text on MARIA via a 320x200 2bpp framebuffer (there's not enough DMA time for 80 columns in character mode), though you'd be using 4x8 characters there as well. OTOH you can at least render them quickly, as they're all 1 byte wide (2 bits per pixel), where you'd have to do bit manipulation for the same sort of 80-column mode on GTIA's 320-wide 1bpp mode. (Plus you get 4 colors, or 4 colors per scanline, for more colorful text, or more shades at least in grayscale mode, potentially useful for anti-aliasing the low-res characters: 4 shades of gray is pretty decent for basic antialiasing.)

If nothing else, a MARIA-based 8-bit (or 8/16-bit) computer could easily beat the C64's graphics (for games or otherwise) as well as the Atari 8-bit computers', and it has some advantages over the Atari ST itself, especially once you consider RAM and ROM limitations and the tricks/compromises some ST games used for software scrolling effects or for hardware transparency by sacrificing colors. (Pre-shifted sprites and background data use up lots of RAM ... try doing that even with 128kB of RAM, let alone 64kB, and with sane ROM sizes, especially at the lower-cost end Atari Corp tended towards.) For that matter, in small-RAM situations MARIA probably beats the Amiga chipset as well, even once you have enough RAM for some framebuffers. MARIA can do framebuffers at 320 wide up to 4 colors, or 160 wide up to 13 colors (plus more colors for sprites), vs the Amiga not allowing 160-pixel-wide modes (or not at a 3.58 MHz pixel clock), so it would be stuck using fewer bitplanes at 320 width, or at least 256 width to avoid too small a screen (or maybe a bit less with a 1-bitplane sidebar-style scoreboard/status area or something). Or technically you could still do 320x200 4bpp (or slightly less visible) if you single-buffered with a looping scroll in RAM, but it'd be tight and require 60 Hz blitter/screen updates to avoid artifacts. Drop down to just 32kB and maybe you could manage 3 bitplanes single-buffered with less than 8kB of RAM left for the CPU to use. (Though realistically, the Amiga was never going to use less than 128kB ... and if you did do 64kB, it'd have to use 16kx4-bit DRAMs, and technically you could use 4 16kx4-bit chips for 32kB 16 bits wide ... but had Atari Inc been planning an Amiga-based game console for 1985, it probably would've had at least 64kB.) Though, to the Amiga's credit, the flexible display-list nature of its framebuffer plus the 8 hardware sprites do make it surprisingly viable as a game system all the way down to 64kB. And given you'd be working from ROM, you could use almost all the RAM as framebuffer ... so even with just 32kB you could technically do a 256x192 5-bitplane title screen with limited animation ... or maybe a little more or less than that; I'm not sure how much you'd use up on Copper lists and bare-minimum CPU work RAM.
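The framebuffer budgets for those small-RAM Amiga scenarios work out close to the figures above (the "less than 8kB left" is approximate, depending on what else you count against the 32kB):

/* Framebuffer sizes for the small-RAM scenarios discussed above. */
#include <stdio.h>

static int fb_bytes(int w, int h, int planes) { return w * h * planes / 8; }

int main(void)
{
    printf("320x200x4bpp: %5d bytes\n", fb_bytes(320, 200, 4)); /* 32000 */
    printf("320x200x3bpp: %5d bytes (leaves %d of 32768)\n",
           fb_bytes(320, 200, 3), 32768 - fb_bytes(320, 200, 3));
    printf("256x192x5bpp: %5d bytes (leaves %d of 32768)\n",
           fb_bytes(256, 192, 5), 32768 - fb_bytes(256, 192, 5));
    return 0;   /* 24000 leaves ~8.5kB; 30720 leaves 2048 bytes */
}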
(That's still infinitely more than you could do on an ST with just 32kB of RAM.) Of course, none of that considers the cost of the chipset vs RAM, and whether so little RAM was ever worth it relative to total system cost, even in a bare-minimum game console implementation (ie DRAM prices dropped far too fast in 1985 for this to be a real consideration). Then again, Atari Inc had actually been considering releasing an Amiga-based system (console or computer) in 1984, had the chips arrived on time ... instead the Amiga team claimed the silicon was bad, then refunded Atari's license while in negotiations with Commodore (and while Atari Inc was such a mess that someone cashed that check without reporting it to upper management, let alone James Morgan, all within a month of Warner liquidating the company outright). Of course, Amiga ended up reneging on all the other investors/licensees of the chipset in order to sell to CBM.

I realize this is an old post, but it's something I see come up quite often: the XEGS and XE family as a whole wasn't designed to use up existing stockpiles of anything. The XE may initially have used some old stocks of RAM and Atari chips in inventory, but a very large number of them were built with newly manufactured components, and by 1987/88 (around the time of the XEGS's release) they were using all-new components, based on the date stamps and manufacturer names on the custom chips and off-the-shelf parts. (I'm not 100% sure about POKEY chips; they may have had a bigger stockpile of those than of other 8-bit computer parts for some reason ... possibly due to the heavy use in arcades and plans to use them in 7800 carts.) They had just introduced new revisions of the XE motherboards at about the same time as the XEGS, switching to two 64kx4-bit DRAMs instead of 8 64kx1-bit chips (or 4 vs 16 in the 130XE), and the XEGS was based on the new DRAM type. The Lynx also ended up using the same DRAM density and often the same mix of parts. Most if not all the XE computers of that era used 120 ns or faster DRAM (almost 3x as fast as the chipset actually needs). Actually, had they updated the chipset internally, those XEs could've run comfortably at 2x their existing speed based on the DRAM installed (a lot like early-90s Amiga 500s and 600s using 70 ns DRAM on the motherboard and cycling it at less than 1/2 the rated speed).

Side note, but at least in the ST's case, the existing MMU (especially of later production dates) can already run at 2x speed and support a 16 MHz 68000 and, potentially, a 64 MHz SHIFTER with double-res video modes (ie 640x200 16 colors). I'm not sure whether the 8-bit chipset, or the Amiga's, could be modified that way. The STe can't use the same trick, as it requires the GLUE chip timing to be separate from the MMU and SHIFTER. 70 ns DRAM works, and so do some types of 80 ns. (I haven't seen any STF/STFM models using faster than 100 ns chips, though; I think production shifted to the STe too early for that to happen.) The STe might allow the same overclock if you could provide external video sync and blanking via the GENLOCK mode features.
(In this respect it should actually be easier to do than on the STF, but it can't be done by just overclocking existing chips on the board, whereas all you need on the old ST/STF is to synthesize, and possibly buffer, appropriate clock signals from a common synchronous source.) See here for a successful implementation of doubled-resolution modes on an overclocked ST SHIFTER: https://blog.troed.se/projects/atari-st-new-video-modes/

Anyway, for that same reason, the DRAM chips used in the late-model XE computers and the XEGS were already fast enough to serve as MARIA ROM, and probably as MARIA RAM as well. (I'd have to ask 7800 programmers about this, particularly whether DLLs are read faster than DL headers, in which case you'd at least need SRAM specifically for DLLs but not DLs.)
  8. In addition to what Crazyace already said about cost, there are more specifics to consider with PCs. By 1989 there were a number of highly integrated PC-compatible chipsets, some very low-cost ones specific to XT-compatibles, single-chip ones with minimal features in most cases. (There's one oddball Citygate-branded chipset that appears to actually use a 4-bit-wide DRAM bus feeding an 8-bit latch for a 10 MHz 8088, probably using page mode to do 2 quick 4-bit accesses before the CPU needs the data, with the side effect of more flexible memory size options, as DRAMs can be installed as 4 sets of 1-bit-wide chips or as single 4-bit-wide chips, up to 640kB ... or I think 640k, as I don't think it has UMB support.) The more relevant chipsets would be the single-chip 286/AT-class ASICs that were also getting common by then and continued into the early 90s (some transitioned into use as 386SX chipsets, though many of those are 386SX-specific and actually lack some features of the 286 ones, like hardware EMS bank-switching). AMD even made a single-chip 286 core inside an AT-compatible ASIC that was available around 1990 (I'll have to check the catalogs to be sure). But the problem is, even if you could get a good deal on those chips as they fell out of mainstream desktop demand (AMD 386s rapidly drove 286 system prices/costs into the low end after 1991, then Cyrix's 486SLC and DLC in 1992), there's no graphics chipset on them, and on top of that, a lot of extra chip space goes to unnecessary 100% PC compatibility.

However, the other angle would be turning a PC-oriented mass-market graphics chipset into a game console. You have various generic VGA chipsets for that, as well as some blitter-equipped hardware accelerators of the period, the most common and cheapest I can think of being ATi's Mach 8 series. See: https://en.wikipedia.org/wiki/ATI_Mach_series#Mach_8 Except that's still going to be too expensive or too late to be relevant here, and if Atari could get a good deal on them, they'd have been much better introduced into the computer line. It's an IBM 8514 clone with 640x480 256-color support (ie beyond VGA and into the SVGA umbrella of features). Granted, for TV resolutions you only need a fraction of the clock rate for the video output and video DACs, so there could've been potential for something lower cost earlier on, using failed-yield parts at much reduced clock speeds that still work perfectly fine for 320x200 or 320x240 in 256 colors at TV sync rates. (But then you need Atari to work towards that sort of partnership and decide to go that way rather than in-house designs, the latter always having the advantage provided you can build enough units for economies of scale to take over.) More basic VGA chips are also overkill resolution-wise and could have similar underclocking potential for TV-res-only use, and a decent handful of companies had single-chip VGA chipsets around 1990. But VGA has a very limited acceleration feature set (namely hardware scrolling, some copy and fill functions, I think, and mask registers to help with blits) and a weird non-linear pixel organization in its 8-bit chunky mode (256 colors with VGA features ...
not the simple 320x200 mode 13h that works like MCGA as a linear framebuffer with no hardware acceleration, double buffering in RAM, scrolling, or anything). And besides that, any sort of partnering with PC graphics chip manufacturers of the time would more likely lead to developing a game-console specific chip, even if it was derived from one of their existing parts. But Atari doesn't seem to have had interest in partnering in that manner, or may have just had their hands full with the partnerships they were already attempting. (they had some sort of deal with Ensoniq for sound chips and some sort of collaboration with Inmos, with their Transputer and their RAMDAC used in VGA chipsets in the Atari PC line, and they were really excited about the potential of the Transputer at one point; their chief engineer Shiraz Shivji said as much in an informal interview at a trade show in 1987 or '88. But the Transputer itself ended up largely failing on the market and failing to manage the yields and low costs that Inmos or Atari had hoped for, albeit I'm not sure Atari ever considered the bottom-end 16-bit models of the Transputer for more embedded applications, or as a coprocessor on the ST, or as part of a game console, compared to what they ended up doing with the Transputer Workstation they did release in small numbers, later than planned, and at a price I suspect was much higher than they'd originally envisioned) As for the VGA chip in the Atari PC 4, see below: http://www.ataripc.net/pc4-286/ It's a Paradise PVGA1A-JK http://www.vgamuseum.info/index.php/cpu/item/479-paradise-systems-pvga1a-jk Released in 1988, a single-chip implementation of VGA (or 2-chip if you include the RAMDAC), DRAM based, 256 kB to 1MB supported, and it probably has some extended VGA modes to make use of the extra RAM, but at very least it's fully VGA compatible. Now, this most likely was just an off the shelf part Atari got a good/low bid for in quantity, and they probably didn't have any extended or special relationship with the Paradise company. If they had gone the latter route, they would've been wise to plan to use related hardware in both upgraded STs and game consoles rather than continuing to focus entirely on in-house custom hardware for the STe, TT, Falcon, Panther, Jaguar, etc. (albeit I'd argue the Jaguar chipset could've easily been partnered with a VGA chipset, similar to the way 3DFX Voodoo 3D accelerators were later used as companions to a standard VGA-compatible 2D accelerator) In hindsight, Atari probably would've done better had they just used that same VGA chipset in an upgraded ST (or Mega ST) model in 1988, as it's better than what the TT video is capable of and came 2 years earlier (though that too was delayed), and much better than what the STe video is capable of. And VGA is programmable, so aside from VGA monitor resolutions, it could easily be programmed to use TV sync rates for low resolution stuff. You'd also still need the ST SHIFTER for backwards compatibility, but that's a fairly simple and low cost part that also could've been integrated into one of the other ST chips later on. (the original ST SHIFTER, not the upgraded STe SHIFTER) All that said, if Atari had really focused on engineering better in-house graphics chipsets earlier on, that should've still been much more competitive than outsourcing. 
And if or when they worked on a new game console graphics chip, it should have been something equally usable on their computers, whether as a backwards compatible upgrade of the ST SHIFTER or as a separate chip intended to be used in addition to the SHIFTER. For whatever reason, that didn't work out, but I will say the general approach they took with the Panther wasn't wrong, just late and overcomplex. (compared to looking at the MARIA chip in the 7800 and realizing the object list + line buffer combination could work very well for a greatly enhanced replacement for the SHIFTER, using the ST bitplane format but supporting more than just the 1, 2, and 4 bitplane modes of the ST) Also, given the GAME SHIFTER of the Panther (the line buffer and palette chip) is dated 1989 and was designed well before the object processor was complete, it's possible they even had alternate design intents for that chip as well. While I haven't seen anything indicating that GAME SHIFTER chip was intended for use in the ST family, it's possible the Panther itself had originally been intended to be more like the ST or Amiga and use bitplanes rather than packed pixels. The 5-bit wide line buffer and 32 color limit would make plenty of sense if bitplanes were used, and you could then use 1, 2, 3, 4, or 5 bits per pixel for 1, 3, 7, 15, or 31 color sprites (+ one color reserved for transparency). This is actually much more flexible than what the Panther actually does and would have been fairly easy to design to also be backwards compatible with ST SHIFTER modes (more work for the STe and TT modes, though, but STe compatibility would also include DMA sound support, a decent cheap sound option compared to the Ensoniq sound chip). The Master System, NES, PC Engine, and SNES (except for Mode 7) all also used bitplanes. The Mega Drive and Lynx use packed pixels. The Atari 8-bit computers are also packed pixel format, as is the 7800 (mostly; some pixel modes are weird, but none are true bitplanes and at least some are standard linear packed pixels). CGA uses packed pixels, Tandy graphics modes use packed pixels, EGA uses bitplanes, VGA uses bitplanes for 16 color modes and chunky pixels for 256 color modes. The Panther itself used packed pixels, AKA chunky pixels, but the 32 color mode uses 8 bits per pixel with 3 bits wasted. Likewise the run-length object mode uses 2 bytes, with only 5 of the first 8 bits used for 32 colors and 8 bits for 1 to 256 pixel run lengths. (there was also no compact run-length object using just 1 byte, ie 4 bits for color and 4 bits for run length, giving 15 colors and 1 to 16 pixel runs, which probably would've been very useful and is sketched below; 5 bits for color and 3 bits for 1 to 8 pixel runs would be interesting but logically more complex to split up than just cutting a byte in half) Packed pixels are much easier to work with in general and usually easier to design display hardware around, but bitplanes make more efficient use of memory at variable bit depths and allow fast manipulation of 1 bitplane at a time, so if you want to draw monochrome 1-bit characters and use the same drawing routine in all color depths, bitplanes work great (I assume this is why they went with it on the ST, since the ST SHIFTER doesn't allow really flexible and efficient memory usage like the Amiga does with display lists, but the ST can use the same character set and 1-bit monochrome graphics in all of its resolution modes, and equally fast in all modes). 
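To make that missing 1-byte run-length mode concrete, here's a minimal sketch in C of the hypothetical format (this mode never existed on the Panther; the nibble layout and function names are my own assumptions):

[code]
/* Hypothetical compact run-length object format (NOT a real Panther mode):
 * high nibble = color index (1..15, with 0 reserved for transparency),
 * low nibble  = run length minus 1, giving runs of 1..16 pixels. */
#include <stdint.h>
#include <stddef.h>

static uint8_t rle_pack(uint8_t color, uint8_t run_len)
{
    /* color: 0..15, run_len: 1..16 */
    return (uint8_t)((color << 4) | ((run_len - 1) & 0x0F));
}

/* Expand n packed bytes into one chunky pixel per output byte;
 * returns the number of pixels written. */
static size_t rle_unpack(const uint8_t *src, size_t n, uint8_t *dst)
{
    size_t out = 0;
    for (size_t i = 0; i < n; i++) {
        uint8_t color = src[i] >> 4;
        uint8_t run   = (uint8_t)((src[i] & 0x0F) + 1);
        for (uint8_t j = 0; j < run; j++)
            dst[out++] = color;
    }
    return out;
}
[/code]

A run of 2 pixels already matches plain packed 4bpp storage and anything longer beats it, so flat-shaded sprite and background art would compress nicely while worst-case data could always fall back to the normal pixel object formats.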
Hardware scaling in particular is a pain to do with bitplanes (as is any sort of texture mapped or smooth shaded 3D; less bad for flat shading, but still slower), so at the point they decided scaling was a key feature of the Panther, they likely abandoned bitplane graphics rather than trying to use both formats (like the SNES does ... its Mode 7 feature is more or less a separate graphics processor entirely). This makes sense, but at the same time is unfortunate in terms of ST development potential, as a simpler 16-bit extension of the MARIA display list instructions using the ST bitplane format but with 1 to 5 bitplanes would've been better than the Amiga in most respects, though better still if they did things more like MARIA and let the line buffers be re-configured to different bit depths natively. (MARIA uses 320x3-bit or 160x5-bit line buffers, or probably 160x6-bit for simplicity, though only 25 colors are supported per scanline, so only 5 bits are actually needed ... this also means the Panther line buffers are not even 2x as big as MARIA's from 1984, which is a bit sad ... whereas the Jaguar's are absolutely massive by comparison at 720x16 or 360x32 bits). They really should have built MARIA-like features into the STe SHIFTER, or used a 2-chip solution like the Panther, with that line buffer "GAME SHIFTER" chip being fed by a display list processor based on MARIA, at least conceptually. (MARIA is already more flexible and powerful than the Amiga's own graphics, at least in concept, but lacks the bandwidth and resolution support to actually do what the Amiga graphics can ... incidentally it actually has some things in common with Atari Inc's cancelled 16-bit computer projects, the Sierra/Rainbow and the Silver and Gold chips, even though it was developed entirely independently of Atari by the GCC guys) The Tramiels did decide to keep the 7800 and bought the rights to MARIA, so it would've made perfect sense to study and consider its potential beyond the 7800 itself. (both as the MARIA chip itself and as a design concept for something new) As for GCC and the Sierra projects see: https://en.wikipedia.org/wiki/General_Computer_Corporation https://en.wikipedia.org/wiki/Atari_Sierra#Description This is as opposed to the advanced (and likely expensive) workstation class 68000 based Sierra project (the transition was also a horrible mess, helped none by Warner, so it's questionable whether Tramiel's engineers even got a proper look at most of that stuff before documentation and staff started disappearing; there was also yet more work, especially on advanced derivatives of the 8-bit computers, some of it not even done at Atari's Sunnyvale branch in CA but in their New York division). 
Given the Amiga chipset was seen as the sensible, low-cost alternative to those in-house Atari projects, it's somewhat understandable they were outside the scope of what Tramiel wanted with the concept behind the ST, given even the Amiga chipset was probably more costly than ideal, plus it lacked the high-res monitor support of the ST that gave it more potential for the "better than a Mac or PC" business/productivity side of things. The ST really was better than the Mac or PC in this regard (or better than baseline standard PC options like MDA or Hercules graphics, and vastly cheaper than the new EGA graphics; even then there are some cases where 640x400 monochrome is more useful than 640x350 in 16 colors, and the ST's high refresh rate, flicker-free monitor was nicer than the 50 Hz super-high-persistence MDA monitors that have horrible motion blur ... actually for cartoon or comic line drawing the monochrome monitor was pretty good as well, even for monochrome cartoon animation cels ... plus pixel shape was closer to square in monochrome mode than in color modes, excluding monitors manually calibrated to display square pixels in 320x200 16 color mode) Anyway, a 16-bit enhancement of MARIA angled at more ST/Amiga-beating features for both computer and game console use should've been in development back in '86 or '87 as an alternative to the STe SHIFTER design. Granted, the STe design is confusing, as Shiraz Shivji in 1987 or '88 described STe video modes more like what ended up in the TT later on (except 640x240 in 256 colors rather than 320x480, but the same 640x480 16 color mode was quoted), which was either in error, or the TT SHIFTER is what was originally going to be in the STe but turned out too expensive or delayed. (more likely the latter, as the TT SHIFTER needs 64-bit wide DRAM, or at least 2 banks of 32 bits wide, in either case requiring more or wider DRAM chips and more traces on the board) Even a cut-down 32-bit (or 2-bank 16-bit) version would've allowed 320x240 in 256 colors and 640x240 in 16 colors (or 640x480 in 4 colors), but maybe they were too ambitious, expected RAM and chip costs to drop more than they did before 1990, and only had that more expensive TT SHIFTER design without a cheaper alternative. I do wonder how cheap a TT chipset based machine could've been with a 16 MHz 68020 in place of the 68030, or a 16 MHz 68000 version for the bottom end of that line. It was on the market before the AGA Amigas were and could have been a cheaper Amiga 3000 or mid-range color Macintosh competitor had the price been more like one third to one half of those rather than the TT's roughly $3000 at launch. You'd need the 68030 for an out-of-the-box Unix-ready MMU, but a 16 MHz 68030 would do that, too, and the ST MMU itself had a hacky work-around feature to even allow a 68000 to implement protected memory (at least with revisions used in the MEGA ST) to allow UNIX-compatible multi-tasking, albeit slowly, and you could have 68010 or '020 CPUs with an optional external MMU socket (and 68000 machines could upgrade to a 68010 along with the MMU if desired). But really, the base model wouldn't have been a Unix workstation, just a fast single-task oriented TOS based machine, like most PCs still were at the time. 
(even if you ran Windows 3.x you mostly used DOS software, and while you could do some multi-tasking, any serious work or any games would need to be run with little to nothing else in the background, both for RAM and performance reasons; plenty of users, probably the majority, still ran straight DOS anyway into the early 90s, especially with the graphical DOS shell included in DOS 4.x in 1988 and in the more stable/compatible/popular 5.x) And on that note: the TT SHIFTER would not have been a good basis for a game machine either, probably worse than trying to hack a VGA chip into a console as far as cost vs performance goes. The STe SHIFTER itself was questionable in that role, but the TT SHIFTER's added cost (or the cost of implementing its 64-bit bus) was worse. The 256 color mode was still using bitplanes as well, using more chip space to implement and slower for software rendering (at least anything that used all the bitplanes) and any sort of 3D or pseudo-3D.
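To put the bitplane overhead in concrete terms: with ST-style interleaved bitplanes, reading one pixel's color means gathering one bit out of each plane word, where a chunky layout is a single fetch. A rough sketch in C (the interleaved-word layout matches ST low-res; the helper functions are just illustrative):

[code]
/* ST low-res layout: 4 consecutive 16-bit words hold planes 0..3 for the
 * same group of 16 pixels; bit 15 of each word is the leftmost pixel. */
#include <stdint.h>

static uint8_t planar_pixel(const uint16_t planes[4], int x) /* x: 0..15 */
{
    int bit = 15 - x;
    return (uint8_t)((((planes[0] >> bit) & 1)     ) |
                     (((planes[1] >> bit) & 1) << 1) |
                     (((planes[2] >> bit) & 1) << 2) |
                     (((planes[3] >> bit) & 1) << 3));
}

/* Chunky equivalent: one fetch, no bit gathering -- which is why scaling,
 * rotation, and texture mapping all strongly favor packed pixels. */
static uint8_t chunky_pixel(const uint8_t *fb, int x)
{
    return fb[x];
}
[/code]

For a straight unscaled blit the planar cost mostly disappears (you move whole plane words), but per-pixel addressing like scaling or texturing pays that bit-gathering price on every single pixel.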
  9. I'll try to go back and reply to some other posts I missed earlier, but I just thought of something else regarding Atari Corp working with UK contractors. Atari Corp was outsourcing to the UK from before the ST itself was completed, while they were still working on the OS, launch software, and built-in OS utilities. Specifically they outsourced to (and likely collaborated with) Metacomco, based in Bristol. In particular, Metacomco provided the initial version of ST BASIC used on the ST. The same company designed AmigaDOS, derived from the TRIPOS they'd gotten in 1984, and were very much involved with Sinclair Research on various projects for the ZX81, Spectrum, and QL. (Wikipedia only mentions the QL, but my dad, Alan Hamilton, personally worked on software for the 8-bit Sinclair computers at Metacomco back then) The Flare team probably bumped shoulders with Metacomco staff at some point, but I can't comment on anecdotes with them. I do know the Tramiels personally visited them on more than one occasion, and even used some Metacomco staff (including Dad) as extras sitting/working at ST systems in one of the early UK TV promotions (not one I've been able to find a recording of). He remembered meeting Jack, Leonard, and Gary, but couldn't remember anything about Sam, or at least not by name. Metacomco had an office in Pacific Grove, CA (near Monterey, where Dad lived at the time); he got a job with them at some point in 1984 and ended up moving to Bristol to work with them full time by early 1985. I don't think Dad was especially involved with ST software while there, in terms of actual programming, but he had some overlapping involvement in PR stuff and was kind of a utility player as far as overlapping software engineering and technician skills (he was a software engineer by trade, but ended up doing lots of technician work and hardware tweaking, at least at a high level: ie actual assembly or configuration of chips on the board, or boards assembled into a system case, both at home and unofficially at work ... catching mistakes some of the hardware engineers made: I remember one particular story where pre-production hardware had severe overheating problems, and he went to look at the prototype systems and realized the board layout and ventilation had the airflow going in the opposite direction over the board vs the production model ... which, on top of not working in the production configuration, also implied the margins for cooling were rather poor) I think Dad was mostly involved with the Z80 based stuff early on, as far as actual coding, and ended up doing some work with their LISP compiler or something to do with that later on; I'm not sure on the specifics. He was involved with their work on the Amiga, not directly in programming the OS, but definitely with testing, quality assurance, and tech support, and he represented Metacomco at several conventions where Commodore was demonstrating the Amiga. (he had one story where there was an Amiga demo set up and Dad suggested installing various programs to a hard drive for better performance, and the Commodore guys got upset ... 
or if it wasn't Commodore, it was some software publishers, apparently upset over giving people ideas on how to pirate software or something, or maybe even just an issue over promoting hard drive support, but it was rather silly overall in any case, given this was all standard procedure on XT class PCs of the time) He was probably involved in some of their IBM PC compatible stuff as well. Again, I'm not sure on the specifics, and Dad passed away back in 2019 following a bunch of health complications, so I'm going on what we talked about in previous years. He was at Metacomco into the late 1980s, but I'm not sure exactly when he left. I know he visited Bristol again in 1990 along with Mom, but I think that was a vacation where he met up with friends, and he may not have been with Metacomco anymore by then. (he was at Mizar/Integrated Solutions for most of the early 90s doing a lot of stuff with VME bus minicomputer-sized rack servers or mainframes or something like that, 68020 and '030 based machines, one of which we had set up in our garage at one point ... we had some of the old VME bus cards for such a system until they went to e-waste in the early 2000s, sadly ... though I still have one of the 4MB RAM boards; he did consulting work with Pencom after that, at least I think in that order) At some point among all that he was working with plenty of x86 based systems, especially 32-bit stuff. (he also commented that 32-bit code on a fast 386 based system ran about the same as it did on the 486 workstations they had at work, and people at the company were surprised when his home-built PC was running their code as fast as the vastly more expensive workstations, where the performance difference was much more obvious for 486 systems running legacy 16-bit code or IA-32 code that made heavy use of 16-bit operations) I know he went from a TRS-80 Model I, to a Model 16 (or a Model II upgraded to Model 16 configuration with a 6 MHz 68000 running Xenix) that he used for home business stuff for years (I've still got copies of TurboTax on 8 inch floppies ... still have that Model 16 as well), and then home-built PC compatibles (outside of company-provided proprietary machines he had at home for a while). We never had Atari or Commodore computers at home when I was a kid. So I actually grew up with Nintendo and PC games mostly, though Mom had a VCS boxed up in a closet (missing the power supply and RF adapter) that I got hooked up around 1999 or 2000, but that's another story. Dad definitely remembered the Jaguar, though I don't remember seeing ads for it (I was certainly old enough to remember in '93-95, just never saw it on TV or on display at Fry's or such when Dad used to take me there). He mentioned something about that way back when I first got into retro Atari stuff in the early 2000s, I think something about it being neat/interesting but unfortunate, or a mess, or something like that as far as how it turned out on the market. We never owned one, but I think he tried one out at some point, or had a friend who had one. He was also working in Sunnyvale somewhere near Atari HQ around the time it was being developed (not sure if that was Mizar or something to do with Pencom, probably Mizar). Weirdstuff Warehouse was a few streets over from Atari HQ, too, and we used to visit there. (I know we visited some other wholesale/used electronics warehouses in the area more frequently than Weirdstuff, but I don't remember the names ... other than a guy named Jay operating the one we went to most often)
  10. I'm not sure about Sam on the ST, but I generally treat him as an unreliable narrator on both the business and technical side of things. As for Leonard: in a recent interview on YouTube, he said pretty straight up that the ST was slower to get started than they'd expected and followed up by very clearly stating the IBM PC was the main reason for this, with the ST having launched just as the PC started getting really big and becoming more of a commodity product. (he didn't say so explicitly, but there was also the issue of PC clones becoming common, and home/boutique built or upgraded machines starting to appear from wholesale Turbo XT clone motherboards, at lower cost and better performance than anything IBM or pre-built 3rd parties offered, with the possible exception of the Tandy 1000) That said, he didn't make out that it failed to make much of an impact, just that the massive potential for success was curtailed heavily by the success of the IBM compatible PC market of the time. He went on to mention the ST took off more quickly in Europe, which is pretty evident and obvious. That's also the market where they competed much more directly with the Amiga, or rather with the Amiga 500 (and probably the 2000 in a lot of business/professional applications, especially given Commodore Germany pushed that model hard and the ST was doing especially well for business use in Germany). From the records I've come across for both the US and European markets, the ST did significantly better market-share-wise than the Amiga prior to the A500's release in 1987, or even through 1987, and the ST's market presence and software base almost certainly laid the groundwork for the Amiga's success in a way that wouldn't have happened had it only been the Amiga 1000 on the market prior to that. (with '88 being when they fell behind, likely in part due to supply and pricing issues partially related to the DRAM shortage; I know Atari had to raise prices on the 520ST that year while Commodore was dropping the Amiga's, so they both hit around $400, or 400 pounds in the UK, where it had been around 300 for the 520STF vs 500 for the A500 at the beginning of the year ... they might have been able to set the 1040STF as the baseline standard if not for the DRAM shortage and supply issues, and with yield problems on the new 1Mbit DRAMs whilst production of the older 256kbit DRAMs had been scaled back in anticipation, Atari's use of 256kbit chips in 520/1040 ST models and 1Mbit chips in MEGA ST models were both hurt, for different reasons) The ST had the bonus of being more IBM-compatible-alike than the Amiga or Macintosh in hardware, OS, and disk format (with CP/M or DOS-like syntax and a compatible file system), with a very good feature set compromise between a PC-AT or XT and a 512k Mac in most respects, except without a Tandy 1000 or Amiga 2000 style big-box expandable option (let alone standard ISA slots) or cheap MFM/RLL hard drive options without a SCSI bridge adapter. Shame the ST's cartridge slot wasn't like the 130XE's ECI port with at least basic expansion potential and a direct connection to the 68k bus (also like the Amiga 500 side expansion connector). SCSI drives were nice, but not cheap. OTOH external 5.25" floppy drives did become available, making portability of files from ST to PC even more flexible at a time when 3.5" drives on PCs were less universal. 
From what I've seen and read, Atari was never super keen on promoting the ST as being PC-like or trying to expand further in that direction in terms of software support or hardware features, but that really seemed like its strongest point as a flexible utility machine with overlapping software compatibility and portability to both PC and Mac, with vastly better raw performance and features than anything else in its price range or well above it. (with the exception of PC or Apple II style expandability) See the interview: [quote] I've always viewed the Panther as Atari trying to re-enter the console market, with a machine that technically could match or better what Sega were putting out with the MD and Mega CD, so more colours, and Nintendo with the SNES, so better sprite handling abilities, which themselves have been mocked by the likes of Rob Nicholson and Jeff Minter, who actually worked on the development hardware, on projects Leonard Tramiel himself seemed unaware of.. [/quote] From the point of view of Atari Corp, they'd re-entered the console market (and even the video game publishing market) in 1986 when they formally released the 7800 and 2600 Jr and actually started up new production rather than selling off surplus Atari Inc inventory. It's better to think of Atari Corp as a renamed Tramel Technology Ltd. more than anything else, a separate company that absorbed Atari Inc's liquidated assets. Had Atari Inc actually been sold off wholesale, things would've been drastically different. (granted, had James Morgan and Atari Inc staff even been aware that liquidation was pending rather than just a potential sale of the company, they'd have certainly handled things better and much more smoothly than what happened ... the liquidation of Atari Inc was a complete mess on Warner's part, and that confusion did a great deal to keep the new Atari Corp from making the most of potential new hires of former Atari Inc staff or of Atari Inc projects) Then you had the bigger mess of Atari Inc not owning the 7800, it being a Warner contract with the GCC guys, and that led to more legal issues and negotiations until Tramiel finally just purchased the rights to the hardware (and paid off GCC's contract) outright. Albeit, from what Curt and Marty mentioned years ago, with the pace things were moving at Atari Corp and the resources they had, a full roll-out of the 7800 wasn't going to be ready in 1984 or even 1985 as it was, and it took the efforts of Mike Katz throughout 1985 to actually set up the new Atari Corp entertainment division. Katz also noted the very strong sales in the 1985 Christmas season, to the extent they sold off all existing inventory and would've sold more had they had the production capacity ramped up. The 7800 launched at the same time as the NES and was quickly outpaced by its 1986 success, though it maintained a strong second place market share for much of the 1986-1988 period (I'll have to dig up figures at some point), but with the limited amount of investment Jack Tramiel was willing to put into the entertainment division, it would've been up to 3rd parties to come up with anything really big on the 7800 and make the sorts of investments in enhancement chips (mappers and sound expansion) that the Famicom saw in Japan or the NES in the US (albeit without the sound, due to the change in cartridge pinout), and Nintendo's market lead combined with predatory licensing agreements prevented that. 
The only reason the Master System had such decent software was Sega's massive internal software development effort, even though they did relatively weakly in the Japanese market and were far behind Atari's figures in the US. They did OK in the UK as far as consoles went, but that's inflated by excluding home computers (the ZX Spectrum and C64 being the real standard 8-bit game machines in that market prior to the 16-bit consoles and home computers, where I believe the Mega Drive eventually ended up outselling the ST and Amiga in that segment of the UK). Atari's very late entry into the UK market and Nintendo's predatory (eventually ruled illegal) licensing restrictions in the US, for both domestic and foreign publishers trying to release games there, screwed the 7800's potential for growth. It sold well enough that there easily could've been better 3rd party games with heavier use of bank switching, sound chips (even just super cheap off the shelf ones like SN76489s, or the slightly less cheap low pin count versions of the AY8910), and expanded RAM, and I'd expect Jack Tramiel especially would've preferred they stay out of the video game software development and publishing business entirely and make money from 3rd party licensing royalties instead. Atari ended up having to put more money into getting more games out on the 7800 since no one else would, or rely on computer game publishers who weren't locked into agreements with Nintendo. Even then, the 7800's hardware sales were drying up by 1990 and it was pretty much a legacy platform by 1991 (ie an installed user base, but few to no new buyers), and it hadn't been on the market in the UK for nearly as long, but also didn't sell nearly as well there, probably in part from having to compete with the ZX Spectrum, which was even cheaper and had much cheaper software, and the C64, which wasn't as cheap but still had much cheaper games (and more comparable graphics, but better sound). Maybe if the 7800 had launched with a RAM+tape drive expansion module it would've taken off, but even then it also would've needed to be in stock sooner. Wikipedia says it made it to PAL regions in 1987, but I recall a lot of anecdotes and some print articles indicating wider availability in the UK didn't really happen until about 1989 or 1990. Actually, a cartridge with 32kB of SRAM and a POKEY chip with an SIO port added for standard Atari 8-bit tape drives would work nicely ... though a simple bit-banging software driven tape interface would be cheaper and could use generic tape drives, the Atari drives were still reasonably inexpensive ... and better for Atari's profits, with the potential for doing cool things like synchronized tape soundtracks alongside game code and/or loading. (not ideal for in-game music, but it would've been really cool for intro/title screens or cutscenes with linear organization on tape) In any case, they'd made a big enough market presence in the US during the years Katz was there (1985-1989) that the Panther would've been a continuation of that had it made it to market on time, or had they come out with ANYTHING remotely marketable in 1989-1991 ... even 1992, if it was something better than the Panther as-cancelled and cheaper than the Jaguar while also being a reasonable value at that. (the Jag was neither cheap nor compelling in its test market time in 1993, but it also entered just in time to hit a recession/slump in the market, which in fact was the main driver behind Sega's release of the 32X, to try and fight the slump ... 
though that's another story in itself). They also tried with the XEGS, which I believe Leonard Tramiel said was their attempt to "do the 5200 right", except from the print ads I've seen, it ended up being priced higher than the 65XE, with $200 figures in some cases when the 65XE was just $95~99 and the 7800 was under $80. This was probably including the light gun and keyboard, but even so, a 65XE had a keyboard built in and the light gun was a gimmick at best. I'd argue a simple RAM+POKEY expansion cart for the 7800 would've been better (and could've been built into upgraded models for still under $100) ... hell, just use 28 of the 32kB in the expansion unit and re-map 4kB of the SRAM chip to replace the 2 2kB SRAMs in the 7800 to save board space and cost, probably fitting onto the same motherboard. (if you couldn't quite fit POKEY, you could save a lot of space by removing the RF modulator and using an Atari monitor port instead, with an optional external modulator/switch box combo like Sega did with the SMS and MD in Japan and with the MD2 in the US and UK ... Commodore also did this with the VIC-20, though I think that RF modulator was both bulkier and a bit crap ... but by the late 1980s, composite video was becoming common on tons of TVs, with high-end ones adding S-Video in 1987, and even if your TV lacked it, VCRs of the time included it and could provide the RF modulation to an older TV, so Atari could've offered higher quality composite video cables and omitted RF modulators in some bundles, saving costs so long as they marketed it clearly enough ... and probably kept the RF modulator bundled in regions most likely to be RF-only; not doing this with the Jaguar in 1994 was far more bizarre, and including an RF modulator inside the system was a strange move for that time, wasting board space and manufacturing cost vs a cheaper A/V port and optional external modulator ... hell, I'd argue it was wasteful/overkill to stick an RF modulator on the STe motherboard either, rather than just including composite video + composite sync for wider compatibility with RGB monitors and SCART-capable TVs in the UK/Europe ... though S-Video would've been nice and relatively cheap to implement with common RGB video encoders of the time: you could also omit a dedicated composite video line in favor of merging chroma and luma in the cable) Actually, if they could get it out the door by 1990 (better yet 1989, in parallel with the STE's release), a console based on the STe hardware stripped of as much as possible and not ST-compatible (unlike the XEGS, but maybe with a general purpose expansion port allowing computer expandability) would've been a lot better than nothing and might have had the significant side-effect of getting more developers to release STe-specific games, and with those, more STe-specific graphics/art/music programs. Leonard Tramiel mentioned an ST console would've been too expensive, and an STe one too, and/or not good enough, but I'm not sure he was considering the concept of: take the STE ... remove everything but the GST SHIFTER, GST MCU, 68000, and blitter (which was later integrated into the MCU chip anyway), plus minimal I/O port handling logic for the 2 enhanced joystick ports. (I think those have dedicated hex decoders or something and don't make use of the keyboard controller at all). They might have still needed to include the Motorola MFP chip, though even the Panther dev units had that onboard. 
(you could also repurpose its timers and I/O lines no longer needed for computer stuff, but then that would complicate expansion into a full ST and add things for programmers to exploit that couldn't be brought over to the STe) The LMC1992 stereo mixing chip could be included if it was cheap enough, but it also doesn't get you that much as configured in the STe (you'd really need the 2 DMA channels to be mapped to separate left and right stereo channels, using 4 of the chip's inputs to allow smooth panning of each channel, or even setting both to center/mono but at different volume levels ... and even doing 16-bit mono sound by setting one to 1/256 the volume of the other, which that chip does actually support ... it's in dB steps and not linear in the datasheet, but at one point I checked and it's either exactly 256 or close enough to still approximate 16-bit linear sound better than either the Amiga or the Sound Blaster 16 can: the latter was only rated for the upper 14 bits to be valid, at least in early models, ie not Vibra 16s and such) Then you just have to force developers to do software-mixed music and sound effects on those sound channels, or the simplest solution: just multiplex channels by interleaving samples, as sketched in the code bit further down (2 50 kHz channels become 4 25 kHz, 6 16.6 kHz, or 8 12.5 kHz channels, and the latter case is closer to what the 8 MHz 68000 could handle inside a game if you're doing a lot of note-scaling on the fly and not pre-scaling everything in RAM; even then, with 512kB you can afford to do a lot of pre-scaling and also keep some samples directly in ROM, ie anything at a fixed pitch like percussion; and without scaling it's just a matter of copying over and interleaving the samples with relatively little CPU overhead, or potentially even using the blitter to help with that; plus there are cases where you can store short looping sections of music in RAM and then just mix in sound effects, or short segments of music stitched together into longer, more complex tracks without needing one long stream) Then add a boot ROM that includes the copy protection signature check for cartridge based software (ie the only software on an unexpanded unit) and there you go. A comparatively tiny motherboard with little more than a CPU and Atari's custom chips (less the DMA chip), not using any features the STe didn't also have, but FORCING all of those features to be used. The final question is 512kB or less? With ROM you can make do with less and not lose much, since with hardware scrolling and the blitter you don't need to rely on pre-shifted data as much. 128 kB would be the minimum practical (via 4 64kx4-bit DRAMs) and would leave one bank unpopulated to more easily upgrade. But if they used SIMMs or SIPPs (same pinout and board mounting positions), then 512kB would be the minimum anyway via a pair of 256kx8-bit modules. 
(then the upgrade would be to 1040STE standard, a "full computer") Hell, Atari could've avoided releasing the 520STe at all in favor of a 512kB game console variant that was MUCH cheaper than the Amiga 500 (though considerably more expensive to expand to 1040ST standard, even with a cheap 3rd party RAM expansion; 256kx8-bit SIMMs became dirt cheap quickly, and got even cheaper when everything else was getting more expensive in 1993/1994 due to a new DRAM shortage. They got so cheap they nearly matched bulk DRAM prices, but at a consumer level: there was a surplus of new and used 256k modules and high demand for the 1MB modules needed for 4 MB or more in typical PC motherboards of the time ... 8-SIMM-slot boards could get you 2 MB very cheaply at the time, but unless you used SIMM-saver style multi-SIMM adapters, you couldn't get more than that ... the latter were also good for adapting 30-pin 8-bit SIMMs to 72-pin 32-bit SIMMs; my Dad actually used those in the first home/family PC he built for me back around 1993 or 1994 ... also a used, paper-white industrial monochrome VGA monitor, so 3D glasses effects in games didn't work, but we eventually upgraded that ... after installing a CD-ROM drive, yay for the black and white multimedia PC) Come to think of it, the SIMM savers were probably used in a later upgrade and not the initial 1993/1994 build. (I still have that baby AT case with the K6-2/550 build from around 2000, but don't have that original monitor ... I think I still have the color SVGA monitor that replaced it, though; that B&W one would've been fun to try out with an ST in monochrome mode) And yes, the STe was underpowered compared to what Atari should've come out with by that time (and for a computer, as opposed to a games machine, a faster CPU should've been a higher priority ... ie a vanilla 1040STF with a 10/12/16 MHz 68000 + blitter would've been more appealing, and DMA sound could've been added without a new SHIFTER ... a cheap IDE interface or general purpose ISA-compatible expansion port would've been way more useful too, or just a bare 68000 bus expansion port to implement the same externally). But given whatever chain of events led to the STe being as it was, they'd brought it to market, and a game console based on similar specs (sans all the computery bits), while obviously not better than an Amiga, would still have been notably more capable at games than the ST, and without slaving most of its RAM to pre-shifted graphics and workarounds for software scrolling and faster sprite rendering. (you could still do a lot of the latter, but make much better use of that RAM doing so). 512kB ROMs would be common on the lower end even from the start, with 256kB limited to really cheap budget titles and probably ST shovelware ports that originally fit on single 360 kB floppies (except they'd need new sound/music design, could fairly easily implement hardware scrolling if not new sprite blitting routines, and would have to use the enhanced joystick ports and remap any keys to the keypads on the controllers: we're assuming the Panther/Falcon/Jaguar gamepads here). 
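Going back to the sample-interleaving idea above, here's a minimal sketch in C, assuming a plain signed 8-bit DMA buffer (the function and layout are illustrative, not actual STe driver code): round-robin k software channels into one hardware stream, so a 50 kHz channel behaves like k channels at 50/k kHz each.

[code]
/* Round-robin k software channels into one DMA sample stream:
 * output slot i gets sample number i/k from source channel i%k. */
#include <stdint.h>
#include <stddef.h>

static void interleave_channels(int8_t *dma_buf, size_t out_len,
                                const int8_t *const src[], size_t k)
{
    for (size_t i = 0; i < out_len; i++)
        dma_buf[i] = src[i % k][i / k];
}
[/code]

With both hardware channels running you'd get 2k virtual channels total, and since it's just a strided copy rather than true mixing (no additions, no clipping), it's exactly the kind of dumb data movement a blitter could plausibly assist with, as noted above.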
But 512kB ROM games with 512kB RAM could do some things you'd need a 1040STe to do, albeit without the flexibility of multi-disk floppy based games or games that made use of a full 720kB floppy, especially with compression (you could also compress data in ROM, but you lose the ability to use it as RAM-speed data for the CPU/blitter, albeit streaming compressed data from ROM could be faster than streaming from floppy, and compression on the latter was sometimes used to improve data rate as much as to save space) In any case, since the contemporary Amiga 500 wasn't touting 1MB of RAM and indeed was limited to 512kB of Chip RAM, where the ST blitter could easily work in the full 24-bit address space (4MB limited by the MMU alone, but even that is much larger than an OCS or ECS Amiga), you'd thus have cartridge based games capable of animation and/or higher quality sound effects, or other features the Amiga couldn't manage, and RAM to sacrifice for faking dual playfields via animation while maintaining full 16 colors per scanline vs 7+8 colors in dual playfield mode + lots of CPU and blitter slowdown on the Amiga. (you could still opt to drop to fewer bitplanes in portions of the background that don't need as much color, to speed things up and use less RAM, and do that along 16 pixel, word-aligned columns to maximize speed) It's the same method used by PC Engine games that fake added parallax layers, and some Mega Drive games like Sonic 3 do it as well. (some NES games do it too, and obviously a number of Atari ST games and demos) Now, one area where ROM could net you more performance over ST RAM is that you could set it up to be on the 68k bus only, so it acts like Fast RAM on the Amiga, except the ST blitter works on the 68k bus already, so it gets the boost as well. Thus any code and data fetched from ROM avoids the 4-cycle alignment limitation, and cheap 250 ns ROM (standard by that time) would get you easy 0ws access with an 8 MHz 68000. You hypothetically could even interleave 68k and blitter cycles in ROM, but this would need more hardware (basically a ROM-specific MMU sort of chip with a bus latch) and isn't in the "as dirt cheap as possible" spirit of repurposing the STe hardware as-is from 1989. Additionally, ROM could even help with 3D games for the above reasons (heavy use of multi-bit shift operations in accelerated multiply/divide routines doesn't align with the 4-cycle memory period the MMU provides, so true 0ws operation would be a real gain there). It also breaks the "no DRAM in consoles" decree from Jack Tramiel, but then it's got totally different market potential as something expandable to full STe standard and a tool to enhance software support for the STe's feature set, perhaps especially so in the US market where the ST itself didn't have the sort of success as a dual-purpose game console and home computer that it did in the UK and Europe, though especially the UK, as far as games went (ie it wasn't as much a business machine as some of its success in mainland Europe made it out to be ... granted, Jack Tramiel's vision was less of a business machine and more a general purpose personal/home/business/educational machine). Albeit if they really wanted to push that no-DRAM angle, you could technically use PSRAM in place of DRAM in the ST, but there would be little point, as it still wouldn't be as cheap unless you could do away with the MMU entirely in favor of a much simpler bus controller chip (that'd be nice if it also added more programmable screen size/overscan for the SHIFTER ... 
you could also omit refresh logic entirely that way, as you could extend the vertical display as far as needed to ensure refresh via pure video cycles alone). You could thus leave the MMU to be included in an expansion module with all the other hardware ... except I think DMA sound partially relies on the MMU too, so you'd need that in the replacement chip as well. (if it was significantly cheaper and available at the same time, and/or could include the blitter on-chip from the get-go, that would make sense, but otherwise it really doesn't) OTOH including PSRAM on the ST itself as a 0ws CPU scratchpad "Fast RAM", optionally accessible to the blitter, would be something else, especially if the CPU switched to a faster clock while in that RAM. (not a cache; a cache is much more complex and needs caching logic and much faster SRAM to actually fill/update and read-through or write-back the cache data, depending on the scheme: ie what the MEGA STe has took a lot more engineering work, where adding PSRAM/SRAM Fast RAM to the ST would be as simple as or simpler than DIY RAM upgrades already were ... which makes me wonder why no one did it back then; I mean, the Atari 8-bit got early, proprietary RAM expansions that got software support, so ... actually you'd just have to write a TOS routine for it and any TOS based software could be accelerated automatically; games would be another matter, though) Actually, you could probably just have the CPU clock speed "turbo mode" gate toggled without using bank switching, by simply mapping the Fast RAM outside the 4MB range the MMU can support (and outside of the chipset I/O area, which is all inside the MMU address range anyway, I think), so when the upper two address lines on the 68k go active ... or wait, no, I just checked the ST memory map and there's a bunch of I/O and register stuff way up at the top of the 24-bit address space, so that's not what you want. What you'd want is a specific address range above the 4MB MMU limit but within the lower 10 MB range already reserved for RAM expansion and below the MEGA ST VME bus range (though Falcon RAM expansions map into that range too, up to roughly 14 MB), so it would be a little more complicated, but still could be done automatically and fairly simply without having the CPU write to a memory mapped I/O port for bank switching. You also probably wouldn't need to be able to disable Fast RAM, as it would only ever be active for accesses to that specific address range. The ST memory map is here: http://cd.textfiles.com/ataricompendium/BOOK/PDF/APPENDB.PDF Even without a faster CPU clock you'd still get a modest speed boost for certain types of code, just much more so when combined with a clock speed increase. (and as far as DIY upgrades went ... without replacing the CPU or TOS ROMs, you could probably get away with 10 MHz, but you'd need to synthesize it from the system clock to remain synchronous with the MMU and align along every 5 CPU clock ticks, so that would've been needed in the conversion kit; and at some point it would probably become cheaper to have an upgrade that just used 120 ns SRAM or PSRAM and required a 16 MHz 68000 driven off the MMU clock, with software to bank switch the CPU into Fast RAM and simultaneously gate over from the 8 MHz clock to the 16 MHz one) Hmm, Atari could've done that in an STe based console: provide 64kB of 120 ns PSRAM (or SRAM) and a simple mechanism that switched the CPU from 8 to 16 MHz while accessing it. 
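Modeled in C, the decode that glue logic would have to do is trivial; the 0x400000 base here is my own assumption (just above the MMU's 4 MB limit, inside the range the memory map reserves for RAM expansion), and the names are hypothetical:

[code]
#include <stdint.h>
#include <stdbool.h>

#define FASTRAM_BASE 0x400000UL       /* just above the 4 MB ST-RAM limit */
#define FASTRAM_SIZE (64UL * 1024UL)  /* 64 kB PSRAM/SRAM scratchpad */

/* Models the glue logic: when this returns true for the 24-bit address,
 * select the scratchpad chip and gate the CPU over to the 16 MHz clock;
 * no software banking or memory-mapped enable port is needed. */
static bool fastram_hit(uint32_t addr24)
{
    return addr24 >= FASTRAM_BASE &&
           addr24 <  FASTRAM_BASE + FASTRAM_SIZE;
}
[/code]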
You could then have the blitter in HOG mode working in ROM and RAM while the CPU does fast routines in its private RAM. Plus you still get the marketing label of 16 MHz, even if it ends up realistically weaker than competing 16-bit game consoles in most respects, though probably better at 3D, especially simple filled/shaded polygons (I wonder if the blitter's halftone mode would allow dithered polygon fills for shading). For texture mapped 3D and scaled 2D objects you'd have more trade-offs compared to the Mega Drive's chunky pixels, not that it actually got many Wolfenstein-a-like games. (plus Catacomb 3D and its kin working in EGA graphics had to deal with bitplanes anyway AND work within 512 or 640 kB of RAM, generally 512kB minimum free, so 640kB with DOS loaded, or a boot disk required for a 512kB system, though I think Wolf3D itself had added features with a full 640k of RAM and more with EMS memory available) The 16 MHz CPU + small scratchpad local RAM (64 kB is a comfortable minimum using 32kx8-bit SRAMs or PSRAMs) could've been introduced as an afterthought on the console, but then retroactively added to an STe+ of sorts, or technically ... an 1110STe+, going with Atari's naming scheme based on truncated totals of memory in decimal byte counts. (1024+64 kB = 1088 kB = 1,114,112 bytes) Or you could even drop back to the 512kB base DRAM, especially with the shortage dragging on into 1989 (starting to fall away in 1990, then completely in 1991), but adding SRAM or PSRAM and a faster 68000 at relatively low cost using components that were not in any sort of shortage, for a 580 or 590 ST+ or STe+ or something. (honestly, during 1988, with the DRAM shortage forcing 520STF prices to match Amiga 500s, a fast 68000 + SRAM scratchpad and possibly fast-ROM TOS should've been a quick and simple fix that could appear before the STe itself would in 1989 ... that, and actually populating STF/STFM motherboards with blitters, which should've been even faster/easier to do than messing with faster CPU speeds and RAM modifications) This also would've been a very inexpensive way to add a 12 or 16 MHz (ie bottom-end) 68020 to the ST while having a much more meaningful performance gain than if it was just stuck there on the slow MMU bus with small gains from working in cache and faster internal execution times. (in this case use 128kB as a minimum for full 32-bit width ... albeit keeping the 68020 in 16-bit bus mode at all times would be simpler, too ... and technically would still make it more of an ST than a TT, just more T and less S than a 68000 machine) But for an STe console, a 68020 would be too expensive and lack the flexibility of the many vendors of 68000s (both CMOS and NMOS), most of which offered 16 MHz versions that were falling into the dirt-cheap embedded systems market segment by 1989, where slower 68000 models had already fallen. [quote]Maybe because they were UK based projects??[/quote] Had Atari not just hired one of the lead engineers of that Slipstream project, that might make sense, but the fact that Martin Brennan was brought on to actually implement the chip design of the Panther (the object processor portion) should have immediately opened the doors for potential there, all the more so when John Mathieson was brought in too for the Flare II Jaguar project. Mind you, in parallel with that they were still working on further developments of the base Slipstream hardware for follow-on plans with Konix's Wyn Holloway. 
The intermediate developments are still lost, but by 1993 there was a CD-ROM based version of the Slipstream using 32-bit wide DRAM (or optionally 16-bit with reduced video modes), added 16bpp rendering (not CRY, but 565 RGB pixels) with gouraud shading and texture mapping functions in the blitter (similar to the Jaguar's scaling and rotation function), and a planned 25 MHz clock speed with TV or VGA compatible resolution modes. It was still much simpler (and probably cheaper) than the Jaguar's chipset and all inside one ASIC, with a 386SX planned as the CPU, running at 1/2 the ASIC speed (ie 12.5 MHz). No object processor and a less feature-rich blitter working on a 16-bit bus (albeit texture mapping wouldn't be any slower, as the Jaguar blitter only fetches one texel at a time and writes one pixel at a time, and I think maxes out at 1 pixel per 5 clock cycles even when reading from GPU SRAM, at least based on some tests KSKUNK did years ago: it's also limited to 11 cycles per pixel when reading and writing entirely in DRAM, vs 4 16-bit pixels per 2 clock cycles for gouraud shaded fill operations) See: http://www.konixmultisystem.co.uk/index.php?id=downloads "Slipstream Rev4 Reference Guide v3.3 5th October 1993" Now, it's possible development of the new Slipstream derivative was delayed until after the Jaguar's production silicon came back, but it would also make sense if they were working on it in between contract work with Atari, especially while waiting for test silicon to come back in mid/late 1991 into early 1992 (at which point the first version of TOM came back with bugs and was reworked into the version that was released in 1993 with still prominent, but not as bad, bugs). So aside from the 1989 production version of the Slipstream, Atari and Flare had plenty of opportunity to consider a more foolproof and simpler alternative (or predecessor) to the Jaguar without using any part of the Panther at all, or the potential to re-use much of the existing Panther object processor along with improved versions of the Slipstream DSP and blitter and a 32-bit wide DRAM controller fast enough for the Panther to work with. (or at least fast enough to use for max bandwidth object data, with SRAM for the list) That, and/or adding page-mode bursts at least for fetching object lists if not data. And if not a DRAM controller, they at least could've used 32-bit wide PSRAM and just the basic refresh logic that was in the Slipstream already (and even that should only be needed during vblank, so long as video is enabled and the object lists and/or data are organized to cross all pages/rows of PSRAM ... in which case you'd organize it like on the Atari ST, with each consecutive word on a new row/page, so for 256 rows you just need to scan through a line of 256 words every 4 ms, typical of 32kx8-bit PSRAMs, or 32kx16-bit ones too; with a linear framebuffer like the Slipstream uses, this is simple to achieve, but with a more flexible object processor like the Panther's, you'd need to pay a bit more attention to how you organized list data and object data to make sure that all works without refresh ... 
granted, it becomes super simple as soon as you use at least one full background framebuffer as one of the objects) The above scheme of refresh via video access and that weird word organization along DRAM rows works fine as long as you don't need page mode (or unless you want static column mode instead, which some PSRAMs supported and some DRAMs did back then, though page mode was much more common: you stay in one row and work through different column addresses in the DRAM array). So they could've implemented a system in PSRAM while fully intending to switch to DRAM when possible (and when production volumes merited such a shift). Had anyone involved looked at the specs for the few examples of available 64kx16-bit DRAMs (ie all the ones I can find documentation or catalog listings for, with 1991 being the earliest) and also known the DRAM timing used by the ST and STe MMU, they'd have realized the timing parameters were just about ideal for the 80 ns DRAM when using a 32 MHz MMU clock rate to achieve 125 ns read/write cycle times. Shiraz had left Atari by that point, but he was hardly the only one who knew the ST DRAM timing, even if the Flare guys never looked into it. Or for that matter, if they knew the timing used in the Amiga, which is identical in terms of the critical RAS (Row Address Strobe) and RP (precharge) times, just that the Agnus chip in the Amiga uses a 28.636/28.375 MHz clock for ~280 ns cycle times and the ST MMU uses a 2-phase 16 MHz clock (or works on both the high and low cycles, for effectively 32 MHz inside the MMU) for ~250 ns cycle times. (The Amiga might actually do something fancier, since it also has a second clock phase shifted 90 degrees off the main one and might actually achieve ~104.8 ns RP and 157.1 ns RAS, the latter being effectively 4.5 clock ticks, done by using one of the clock phases to start the pulse and the 90 degree shifted one to end it ... doing it this way would optionally get you extended precharge time and slightly more flexible address and data latching time between the various DMA channels and the CPU, even if the actual access slots are all effectively 280/560 ns, and as far as the CPU sees things, bus cycles are rounded to multiples of 4 CPU clock ticks just like on the ST) I'm not actually sure what DRAM timing was used on the Slipstream, but it might've been something funky making use of the 33% duty cycle clock native to the 8086 (which is why they used 1/3 of the clock crystal rate like XT motherboards do, in this case 17.897725 MHz NTSC or 17.7345 MHz PAL). DRAM cycle times are 3 DSP/blitter clocks (or 251~254 ns). And a full 11.9318 MHz clock pulse is 83.81 ns, which could be used as a precharge period directly with 167.6 ns RAS, but that's quite a mismatch for available DRAM specs and would cut precharge short for most things slower than 100 ns (and even many 100 ns DRAMs, especially NMOS ones). 
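Since all of these RAS/precharge figures (here and in the next paragraph) come from the same arithmetic, ticks divided by clock rate, here's a quick calculator in C reproducing them; the labels and tick counts are just restating the numbers discussed, with the Amiga 4.5:3 split being the speculative case noted above:

[code]
#include <stdio.h>

/* Print RAS and precharge (RP) times in ns for a given controller
 * clock (MHz) and tick counts; fractional ticks model phase-shifted
 * or duty-cycle-split clocks. */
static void dram_timing(const char *label, double clk_mhz,
                        double ras_ticks, double rp_ticks)
{
    double tick_ns = 1000.0 / clk_mhz;
    printf("%-32s RAS %6.1f ns  RP %6.1f ns\n",
           label, ras_ticks * tick_ns, rp_ticks * tick_ns);
}

int main(void)
{
    dram_timing("ST MMU (eff. 32 MHz, 5:3)",    32.0,    5.0,   3.0);
    dram_timing("Amiga Agnus (28.636, 4.5:3)",  28.636,  4.5,   3.0);
    dram_timing("ST timing at 200% (eff. 64)",  64.0,    5.0,   3.0);
    dram_timing("Slipstream 33/67 (11.9318)",   11.9318, 1.667, 1.333);
    dram_timing("24 MHz, 45-deg shifted phase", 24.0,    1.75,  1.25);
    return 0;
}
[/code]

Running it gives ~156/94 ns for the ST, ~157/105 ns for the Amiga's speculative split, ~78/47 ns at 200%, ~140/112 ns for the Slipstream's duty-cycle split, and ~73/52 ns for the 24 MHz case, matching the figures in the surrounding text.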
OTOH, if they used a 33/67% duty cycle split on the clock and both phases for DRAM control, they could effectively use 1.333 ticks for RP and 1.667 for RAS, which is 111.75 ns for RP and 139.68 ns for RAS, both very nice, conservative figures for bog standard 120 ns DRAM that would probably work fine with most 150 ns DRAM out there. (It might even be more compatible than the ST's timing, since RP is extended where the ST cuts it to ~93 ns and instead extends RAS to ~156 ns, and cutting RAS short is often safer, but that wasn't an option on the ST with the clock rate used for the MMU ... though a hypothetical 16 MHz with a 33% duty cycle 2-phase clock actually would've allowed a nicer 145.8 ns RAS and 104.2 ns RP.)

Aside from that, the more common DRAM timing the Jaguar used (with equal RAS and precharge times, rather than the 5:3 ratio of the Amiga and ST) would, for the needs of the Panther, also work fine with the much more widely available and typical 100 and 80 ns DRAMs: RP is 55~70 ns for 80 ns parts and 80~90 ns for 100 ns parts (usually 90 ns for NMOS), so for typical cases you can just round to 80+80 ns RAS+RP or 100+100 ns, or close to it, and be fine. (Say a 40 MHz DRAM controller clock and 75+75 ns for 80 ns rated DRAM with 150 ns cycle time, which is also typical of 80 ns DRAM, and of 1ws or at least 2-tick access, 3-tick cycle 386SX-20 motherboards, or 0ws for a 68020, since those use 2-tick access and 3-tick bus cycle times vs 2-tick access/cycle for the 286, 386, and 68030; the 68000/010 is 2-tick access, 4-tick cycle, and the 8088/8086/186/V20/V30 are all 3-tick access, 4-tick cycle.)

However, for the cycle times needed here, there were 80 ns parts with a 50 ns precharge requirement along with 70 ns RAS and 130 ns rated cycle times, so very close to the ~125 ns needed for the 32 MHz Panther, but you'd be further out of spec if you just used 60 ns RAS and 60 ns RP. You could use ST/Amiga style timing at 32 MHz for ~78 ns RAS and ~47 ns RP (which people have successfully done for 200% overclock mods on STs, though not STes). Or you could get nice figures from a DRAM controller at 48 MHz with a 2-phase clock, or at 24 MHz with a 45 degree shifted phase rather than 90 degrees: at 24 MHz you could use 1.75 clock ticks for RAS and 1.25 for RP, for effectively ~72.9 ns RAS and ~52.1 ns RP. This doesn't help with ideal page-mode cycle times, but for a direct conversion of the Panther or the 1989 Slipstream (with simple SRAM or PSRAM cycle times and no burst reads) that's all you want.

[quote]Sam's memo saying, just say technical issues, is typical Tramiel playing the press, they loved to be in the spotlight, annoucing new hardware, that'd never arrive or arrive late and didn't deliver half of what was promised, they'd always have a throw away excuse ready to explain why it wasn't now appearing.[/quote]

Which is why I'm not sure what to make of that without further context, like asking Leonard Tramiel himself about it. An internal memo is still different from public statements, but at the same time that wording really makes it unclear how to take it, or whether the "anyone" who might ask refers to developers, Atari staff (including Atari UK/Europe), the general public, or all of the above. It does at least fit Leonard's story about the hardware not actually working, but it also obviously worked well enough that at least a handful of dev units went out.
[quote]With games known to be in development when Panther was canned or at the very least, approved for conversion, more colorful versions of Shadow Of The Beast, Tiertex Strider 2,an unknown RPG, Jeff Minter's take on Star Raiders.. That's not going to cut any mustard next to the libraries and more importantly, third party support, Sega and Nintendo could call apon for their platforms. [/quote]

Yes, the initial lineup wasn't that impressive, but relative to the lackluster early (and few) Jaguar releases, it's far less disappointing. Bone stock Amiga Shadow of the Beast would already be a good example of what average use of the Panther's hardware and color capabilities could look like; just throw in some Sega System 16 arcade, Altered Beast style zooming sprites in the foreground when enemies are killed (instead of just falling off screen as on the Amiga, or poofing into smoke in some ports) and you've also shown off at least the basics of the sprite scaling function.

Without broader 3rd party interest, it probably would've had a niche as a budget console with some unique games and a lot of cheap shovelware, but probably a larger library than the 7800 ended up with. It at least should've had a better chance at 3rd party support than the 7800 did, due to the antitrust cases against Nintendo establishing a precedent against certain types of predatory, anti-competitive licensing. (ie if companies wanted exclusive developers they basically had to buy them up outright, or continually pay for exclusive rights to given games and publish them themselves ... and even then they'd have to think that was in their best interest for profits, where sometimes a multi-platform business model is best even if you're a console maker: the profits are in software sales, and if you can sell on the competition's platform and make more money, then by all means do so. Or for that matter, make games for multiple of your own platforms, like Microsoft should have but idiotically failed to do with Windows game publishing after the Xbox released, even delaying the PC release of Halo among a few other games just to bolster Xbox interest: having a console should've expanded their publication base, not diminished it. They were doing amazingly well with both PC and Mac OS applications at the time and into the early 2000s, then ended up screwing that up too by doubling down on the Windows OS market, when they could've been moving towards making Windows freeware, albeit not open source, with a pay-for-IT-support market, and focusing on where their real profits should've been: application software. I mean, they obviously did well enough in spite of that, but they probably could've done as well or better without being as ... unpleasant in their business practices as they continued to be; or you know ...
still slimy, but at least more consistently competent in ways that don't simultaneously hurt end users, the industry at large, and their own bottom line; generally speaking, it's better all around when you can competently and fairly out-compete and dominate the market.)

Now, something like the first stage (and all the on-rails stages) in Soul Star on the Sega CD would be much more like what you'd get from a seriously optimized Panther game, except likely with slightly nicer colors. (The Mega Drive gives you 4 15-color palettes to work with, but for drawing large scaled bitmaps they all have to be on the same tilemap or sprite layer, using one set of 15 colors to avoid artifacting. Soul Star puts extra work in to draw some things onto clusters of sprites and some to a background layer to get more colors, but it's limited there, and the more you use different layers, the more VDP DMA time you eat up re-loading VRAM with the rendered animation. Copying to VRAM is usually the bottleneck for those sorts of games on the Mega CD; you also have to halt rendering during the copy process, which slows things down further.) So take that, but do it at a solid 60 or maybe 30 FPS instead of 20 FPS if you're lucky on the MCD for the actual scaling effects.

And even with the tiny 32 kB SRAM limit, with heavy use of the RLE formatted data and use of 3 color objects (or a 4 color background) where useful, you'd save a lot of space in ROM, just not as much as if you could compress those even further and then load them to SRAM. Realtime decompression in software and streaming to small regions of SRAM would also work, but eats up CPU time that would already be limited for a game maxing out the Panther during active screen time. (A game that manages to use close to 100% of bandwidth in a 200 line display leaves less than 24%, or under 4 MHz worth, of that 16 MHz 68000's time available ... though granted, for games that avoid sprite flicker and tearing, you'll only hit max usage on a few lines with the max number of sprites, so something more like 8 MHz effective speed wouldn't be out of the question.)

Also, I'd suggest the 64kB option that was considered (vs 32kB) would have helped a lot, but really, the next realistic step up would be 128kB. 64kB would use 8 SRAM chips rather than 4, and even aside from the price of the chips alone, there's 2x the board space used, more traces to run, etc. But in terms of manufacturing cost to the SRAM suppliers, consider this: packaging is a big chunk of cost, and pin count is a big part of that. 32kx8 and 8kx8 chips use the exact same 28 pin DIP (or skinny-DIP) packaging, as do 8kx8 and 32kx8 PSRAMs, and on top of that, 32kB SRAM densities fell into the best-value, highest volume class somewhere in the late 1980s, probably by 1988 (when it was cheaper for Epyx/Atari to use a 32kB SRAM chip in 7800 Summer Games and Winter Games, where they just needed 16kB, than to use 2 8kB chips that would've required a special longer PCB and cartridge size). Plus, as a "last minute" change to the Panther, they would've only needed to add a couple of bodge wires or make a minor motherboard modification, since 32kx8-bit SRAMs use compatible pinouts with 8kx8-bit ones.

Hell, for that matter, they could have switched to 32kx8-bit PSRAMs ... and handled the required refresh periods entirely in software via selective code and data location; in vblank you could just have a 1kB sound mixing buffer that crosses all 256 PSRAM pages and gets read/copied over to sound RAM at prescribed intervals ...
Or hell, if vblank is no more than ~63 scanlines long, you wouldn't need any refresh there at all if all 256 pages get refreshed every active line: a 200 line NTSC screen is fine, but PAL would need to be extended to 250 lines minimum, even if that just means blank lines doing dummy all-black/border-color object list reads for minimum refresh, ie still leaving most of the time for the CPU (see the sketch at the end of this post).

Also, in Soul Star, the free-roaming sections with rotating floor textures would be more difficult to do on the Panther, with no hardware support for them. But the games made by Core Design and Clockwork Tortoise on the Mega CD are generally good examples of the sort of color usage the Panther could manage without heavy use of Amiga style palette swaps. (Again, you'd just have nicer 18-bit RGB to work with.)
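To sanity-check the refresh-by-scanout arithmetic above, here's a rough budget calculation (my own numbers and assumptions: a 256-row part needing every row hit at least once per 4 ms, with every active scanline's fetches touching all 256 rows):

[code]
/* Refresh-by-scanout budget: how many consecutive blank (non-refreshing)
   scanlines can pass before the 4 ms row-refresh deadline is missed? */
#include <stdio.h>

int main(void)
{
    const double refresh_ms = 4.0;
    const struct { const char *std; double line_us; int total, active; } tv[] = {
        { "NTSC", 63.5, 262, 200 },
        { "PAL",  64.0, 312, 200 },
    };
    for (int i = 0; i < 2; i++) {
        int max_blank = (int)(refresh_ms * 1000.0 / tv[i].line_us);
        int blank     = tv[i].total - tv[i].active;
        printf("%s: %d blank lines, %d tolerable -> %s\n",
               tv[i].std, blank, max_blank,
               blank <= max_blank ? "OK" : "needs dummy refresh lines");
    }
    return 0;
}
[/code]

NTSC's 62 blank lines just squeak under the ~62-63 line budget, while PAL's 112 don't, which is where the 250 active (or dummy) line minimum for PAL comes from.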
  11. @Ricardo Cividanes da Silva On the topic of collecting information on the Panther itself, here's the archived page from the Atari Museum website: http://web.archive.org/web/20200831194927/http://atarimuseum.com/videogames/consoles/jaguar/Panther/index.htm

The software development documents I have are in the folders "Netlists, PLA's and PAL's" (which downloads panther.zip) and "Panther HW Documents Flare II", downloadable at the bottom of that page. There are also folders with technical schematics in AutoCAD format (.DWG files), but I didn't have any luck extracting those with the programs I tried in Linux.

Info on recovered Atari ASIC designs and schematics for the Panther in PDF format:
http://www.chzsoft.de/asic-web/
http://www.chzsoft.de/asic-web/console.pdf

The "GAME SHIFTER" mentioned in that article should be the Panther's line buffer and palette chip, referred to as the SHIFTER in development documents. It doesn't have much in common with the ST or STe SHIFTER other than being designed by Shiraz Shivji (ie the lead designer of the ST and head of Atari Corp engineering), who also worked on the design of the Panther Object Processor ASIC but left the company and moved back to Texas (I believe) before it was finished. It was then, in late 1989, that Martin Brennan of Flare Technology was brought in as a contractor to finish the chip design. (This should've been shortly after the production version of the Slipstream chip with an 8086 was completed.)

I suppose another thing in common with the ST and STe SHIFTER was the use of external resistor arrays for the video DAC: digital 18-bit RGB output on 18 pins from that GAME SHIFTER chip, rather than an internal DAC and analog RGB outputs as on most consoles and some computer graphics chips of the time. (I believe the Amiga and some VGA implementations already had integrated DACs, while others used separate RAMDAC chips with the palette and DAC built in.) Incidentally, the VGA chipset used in the Atari PCs used an INMOS RAMDAC chip, sometimes called a CLUT chip (Color Look Up Table). This is what I suggested the Panther could've used to save chip space in the SHIFTER/line buffer chip and reduce its pin count while allowing a full 256 colors and the same 18-bit RGB colorspace. (You'd only need 8 video lines instead of 18 to feed the RAMDAC, and the palette RAM would be inside the RAMDAC as well, meaning you'd just need the line buffers on a custom chip, or moved inside the Panther ASIC itself like the old MARIA chip in the 7800 did.) VGA RAMDACs were common, commodity, off-the-shelf parts at the time, so until Atari got to really high production numbers, it probably would've been the cheaper solution overall.

Also, looking at the Panther schematics in that link, it looks to me like only the upper 8 bits of the OTIS bus are connected to a single 8kx8-bit SRAM. There are additional connections directly to the upper 16 bits of the Panther bus (the same portion the cartridge slot and 68000 use), but I'm not sure if that was just for copying data to the 8kB SRAM or if there were any situations where the chip could access RAM or ROM directly for streaming 16-bit PCM data. Only 15 address lines are connected, which would limit it to 64kB (32k words x 16 bits) of address space on the 68k/Panther bus. Seeing as it only uses the lower 15 bits of the 68k address bus, that should map it directly onto the first 64kB of the memory map, which would be the 32kB of SRAM (or the space reserved for up to 64kB of 32-bit SRAM).
The use of 8-bit wide SRAM would mean any sound data would be limited to 8-bit, not 16-bit, which makes the choice of the OTIS over the older DOC (or DOC II) sound chip even more strange. Granted, had they revised the design based on the events of 1990 and 1991, with the DRAM shortage ending and an SRAM-only (or even PSRAM) console being less appealing, it shouldn't have been difficult to fit the OTIS with the 16-bit DRAM it was designed to interface with directly. A single 64kx16-bit DRAM chip (128kB) would probably be the minimum useful amount and minimal space on the board, with a 256kx16-bit chip as the next step up (512kB or 4Mbit). 1991 bulk pricing was around $17.20 for the latter and maybe $4.40 for the former, though possibly a bit more, as the 64kx16-bit chips were a bit more exotic or lower demand while 256kx16-bit ones were common, often used in pairs for 1MB SIMMs of the 72-pin, 32-bit-wide type, or at least would soon be used as such (those SIMM types were introduced in the early 90s by IBM, but I forget exactly when).

The DRAM could potentially be shared as CPU work RAM, but I don't think the OTIS's built-in DRAM control logic natively supports Atari ST or Amiga style interleaved bus sharing, though given the bandwidth requirements for the OTIS at 10 MHz, it should easily interleave with a 68000 at 10 MHz, with the 68k (probably a faster one) reading through a 16-bit latch. I think the chip already makes use of extra available DRAM cycles for streaming sample data into DRAM during playback, but not for sharing with a CPU directly on its bus, so some sort of external DRAM controller or MMU would be needed for that.

While technically less powerful and much less flexible than the "DSP" in the Jaguar (a 32-bit RISC microprocessor copied mostly from the GPU in TOM), the OTIS chip with a decent amount of RAM should've easily been capable of more impressive sound than most Jaguar games actually ended up with, due to it being an off-the-shelf part with better technical support and more straightforward use by programmers and musicians, especially those already familiar with Ensoniq keyboards or MIDI synth boxes, or the Gravis Ultrasound card with its relatively similar Ensoniq OTTO derived chip. They just would've needed to put the joystick I/O handling, serial port, clock synthesizer, and such inside TOM instead of JERRY, or into a much smaller and simpler custom IC.

The 68000 could've been used as the sound controller, like the 68EC000 on the Sega Saturn, which might have meant it worked on the sound bus and stayed off the main bus most of the time. That would've also had the side effect of making it a much better CPU than it was in the Jaguar for the many games that couldn't offload all the processing onto the GPU. (ie it would've made heavy use of the 68000 for more than just sound and I/O handling much less painful on performance than it was on the Jaguar.) Motorola also had 68000s and 68EC000s available up to 20 MHz by 1993, for what that's worth. (They certainly could go faster than that, at least the CMOS versions, but Motorola and its Japanese licensees never pushed them past that commercially, probably due to licensing restrictions, or at least hypothetical legal concerns, and, in Motorola's case, to avoid competing with their own 68020 and higher processors.) You had Atari ST accelerator boards using 68000s factory overclocked to at least 36 MHz.
(I wouldn't be surprised if the "16 MHz" 68000 in every single Atari Jaguar runs perfectly fine at 26.6 MHz, and nowhere near warm enough to need a heatsink ... it should run much cooler than a 286 or 386SX at the same clock speed, and probably much cooler than NMOS 286s or even NMOS 68000s at any clock speed. Note that NMOS parts consume as much power and run just as hot at any clock speed, unlike CMOS parts, which use more power the faster they go but still use much less than their NMOS equivalents.)

See average mass market DRAM prices here: https://phe.rockefeller.edu/LogletLab/DRAM/dram.htm

I'll also upload the developer documents I've previously saved in odt format, if that's easier to use, in case you have OpenOffice. (The original files are in WordPerfect and MS Word doc format.)

PDS.odt PANTHER.odt PANHW.odt
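Incidentally, the per-megabyte arithmetic on those rough 1991 DRAM prices above is interesting, since the two densities come out nearly identical (the prices are the ballpark figures from my post, nothing authoritative):

[code]
/* Cost per megabyte for the two 16-bit-wide DRAM densities discussed. */
#include <stdio.h>

int main(void)
{
    const struct { const char *chip; double kbytes, usd; } d[] = {
        { "64kx16 DRAM (128 kB)",   128.0,  4.40 },
        { "256kx16 DRAM (512 kB)",  512.0, 17.20 },
    };
    for (int i = 0; i < 2; i++)
        printf("%-24s $%5.2f -> $%.2f per MB\n",
               d[i].chip, d[i].usd, d[i].usd * 1024.0 / d[i].kbytes);
    return 0;
}
[/code]

So roughly $35.20 vs $34.40 per MB: the bigger chip wasn't carrying much of a per-byte premium, it's mostly a question of the minimum buy-in per board.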
  12. I've seen that quote before, I think in one of your posts, actually. I'd chalk a good chunk of that up to Sam Tramiel speaking in PR or marketing terms and not being straightforward, and that's even assuming he understood enough about the hardware to make judgments for himself and not just repeat what he'd been briefed on. If we go by what Leonard has been saying in interviews the last couple of years, the Panther chips themselves just didn't work, or, taking that with a grain of salt given it's a 30+ year old memory, at the very least there were production problems bad enough for him to form that memory. Sam Tramiel's 1991 Panther cancellation memo also specifies it was due to technical problems, though that's phrased as "If anyone asks we cancelled it due to technical difficulties," which could either be truthful or just the official word on what to tell software developers and/or the media. (Except Sam contradicted the latter in his own interview.)

Besides that, based on everything I've gathered on the Panther, it wasn't supposed to be a WOW machine; it was supposed to be cheap like the 7800, cheap enough to undercut the competition and fit the budget market niche while also being "good enough" to compete on a similar playing field, with at least some potential for wow factor (the sprite scaling was its main impressive feature there). Now, unlike the 7800, it shouldn't have been as bogged down by Nintendo's anti-competitive business practices, and 3rd party support should've been easier to get while riding on the relationships with developers they'd built up for the 7800 and Atari ST, and without the roughly 3 year gap in market presence they suffered between 1991 and 1994. Even if the Panther had been no more successful than the 7800 in the US, that was still a quite solid 2nd place to Nintendo in 1986-89, and sales up through 1990 were close to 2500% of what the Jaguar sold prior to its liquidation: total US sales of the 7800 were around 3.7 million vs about 150,000 Jaguar units.

Now, assuming they needed time to fix the Panther ASIC to actually bring it to market, it might not have entered production until late 1991, perhaps without a full international launch until some time in 1992. However, I'd argue that if they'd increased the RAM to 128kB, increased the line buffers and palette to 8 bits and 256 colors, and still kept costs low enough to undercut the competition, it would've been decent. (Switching to a cheaper sound chip might have helped meet that.) 32kx8-bit SRAMs (and PSRAMs) have a very similar pinout to 8kx8-bit ones and the same pin count, so even modifications to the motherboard layout would be minimal (just 2 more address lines to RAM).

Aside from that, if they were willing to put more R&D effort into improving the Panther in a meaningful but time-constrained way, Atari could've had the Flare team focus on an integrated single-chip Panther that put the line buffers inside the chip (it could still use an external VGA RAMDAC to save chip space) while also fixing the bugs in the existing Panther chip and adding support for PSRAM rather than just SRAM, with sufficient time in hblank for refresh (you'd need about 4 refresh cycles per scanline, or 520 ns for 80 ns PSRAM).
To strictly comply with 80 ns PSRAM's 130 ns cycle time rating, a 30.77 MHz oscillator and 15.385 MHz Object Processor clock would be needed in place of the 32.2/16.1 MHz speed, but that's also close enough that 80 ns PSRAM should've tolerated the stress; worst case would've been 15 MHz.

Going a bit further, switching to actual DRAM instead could allow 256 kB via two 64kx16-bit chips, and all the available chips of that density also featured unusually fast RC times (random read/write cycle times, ie what you need for SRAM-like behavior, nothing fancy like page-mode burst access). The 80 ns 64kx16-bit chips available from Hitachi and Toshiba at the time should have worked with the 32.215905 MHz clock rate using similar RAS/CAS/precharge timing to the ST MMU (just at 32 MHz rather than 16 MHz), with slightly fast RAS and RC times but precharge within spec: 77.6 ns RAS and 46.56 ns precharge with an RC time of 124.16 ns (vs ratings of 80 ns RAS, 45 ns RP, 135 ns RC). 32.0 MHz would stress that a little less, and 30 MHz would be fully within spec, including the 3 ns rise/fall time for signal pulses, though actual RC time would then be very slightly under the 135 ns rating, at 133.33 ns. (Given the line buffer clock is independent of the Object Processor and CPU clock, they could also change the CPU/OP clock without changing the screen resolution or pixel shape, have developers target the minimum possible clock rate for launch titles, then potentially have more headroom later if a higher clock rate was achieved.)

Flare should've been able to basically copy the DRAM controller logic block of the ST MMU, but allow the Panther to saturate the bus rather than interleave (as the ST SHIFTER does), while doing hidden refresh cycles on every CPU bus cycle: in vblank you'd get constant refresh, while in active display you'd just need enough time after Panther DMA to ensure a minimum number of refresh cycles. (Worst case, this could be done in software, cautioning programmers not to let the Panther OP use 100% of the scanline period for DMA.) Alternatively, if they removed the external sound chip entirely and added simple DMA sound integrated into the Panther ASIC, the minimum refresh cycles could be nested into fixed sound DMA slots synchronized with the scanline/H-sync timing. (Something much cooler would've been integrating the Slipstream DSP onto the Panther ASIC, which also should've been cheap compared to external sound chips, but less foolproof in terms of getting the thing out the door with working chips ASAP.)

After looking through datasheets from 1989-1993, I also couldn't find any vendors offering 64kx16-bit PSRAMs or SRAMs (that's 128 kB, 16 bits wide); the only 16-bit wide options were the 32kx16-bit (64kB) chips from Sanyo and I think Hitachi and Sharp that Sega used on the later versions of the MD/Genesis. Those would work fine for replacing four 32kx8-bit chips. So the only obvious low-cost 256kB, 32-bit wide option is the 64kx16-bit DRAM chips in Toshiba and Hitachi catalogs by at least 1991.

Now, on the other hand, the Slipstream would be better as a budget oriented console, with its floppy disk drive and marketing oriented at the range of budget software titles made possible by that medium compared to carts.
(The KMS itself was targeting £15 per game in 1990.) But unlike the KMS, it could also aim at expanding that to include higher profile games at higher prices, as would likely be necessary for higher profile US (and potentially Japanese) developers and for ports of PC games that required multiple floppy disks. Plus, they could include a cartridge slot for combo cart+disk games, for reduced load times and fast access to the bulk storage needed for lots of animation (like arcade fighting games). You'd gain the potential for more flexible game design and easier/better ports of some computer games (including graphic adventures and flight/vehicle sims, which weren't common, or commonly done well, on consoles), offering something different from other consoles and expanding on the 7800 and Lynx's business model of attracting computer game publishers.

That, and offering the Atari ST mouse as a standard peripheral (probably via the 15 pin enhanced joystick connector of the STe/Falcon/Jaguar) would make point and click adventure games and Catacomb3D/Wolf3D a lot easier to play, while analog flight sticks or wheels would be good for many sims. (The latter would be a point where Konix would've been good to get in on before they sold off their joystick division; they could have offered peripherals for Atari's console instead, including the KMS modular controller itself ... which IMO made much more sense as a versatile multi-purpose peripheral than built into the console itself.)

Plus, with disks you gain the potential for very cheap or free releases of demo disks and shareware (the latter could use normal PC/Atari ST style FAT disk formatting so as to be easy to copy, where full games could use proprietary formatting and encryption, plus better than 720 kB, or potentially better than 880 kB, capacity if you're willing to use bulk storage formatting for much of the disk, closer to the theoretical maximum of 960 kB ... the 512kB expansion would facilitate this, as you could more easily load large blocks of data at a time).

For the Slipstream base unit, Atari would either have to sell hardware at a loss (or at cost) to keep the price point down, or rely on marketing that promoted the inexpensive game options. The ability to use a demo disk compilation instead of full pack-in games could also balance this out somewhat, since you don't have the trade-off of losing profits on a killer app used as a pack-in game, but can still show off demos of the more impressive games available right out of the box. (Like if the Jaguar had been CD-ROM based and come with a pack-in demo compilation of the best 1994 releases, including a shareware/demo version of Doom ... the low cost media would've made ports of some of the other Doom engine games rather straightforward too, possibly with some further optimization, probably Doom II and Heretic, though perhaps not Hexen given the added RAM required; then again, Carmack was attempting Quake on the Jaguar, so who knows ... and hell, as silly as it might sound, if Atari really couldn't get CD-ROM at a sane price point by 1994, a high density floppy drive with potential for bulk formatting at >1.86 MB might have made sense if it actually meant the system got more software, both budget stuff and higher profile stuff, and especially shareware demo releases of PC games; plus, you even get the potential for rushed, buggy early releases to be followed by cheap or free upgrades to patched versions with proof of purchase.)

Floppy disks also make for cheap/easy game save functionality.
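For reference, the floppy capacity figures above all fall out of the same simple arithmetic (80 tracks x 2 sides x 512-byte sectors on a 3.5" DD disk, with sectors per track as the only variable; the 12-sector figure for bulk formats is approximate):

[code]
/* DD floppy capacities from sectors-per-track. */
#include <stdio.h>

int main(void)
{
    const int tracks = 80, sides = 2, sector = 512;
    const struct { const char *fmt; int spt; } f[] = {
        { "PC/Atari ST standard (9 spt)", 9 },
        { "Amiga trackdisk (11 spt)",    11 },
        { "~max bulk format (12 spt)",   12 },
    };
    for (int i = 0; i < 3; i++) {
        long bytes = (long)tracks * sides * f[i].spt * sector;
        printf("%-32s %7ld bytes = %3ld kB\n", f[i].fmt, bytes, bytes / 1024);
    }
    return 0;
}
[/code]

That gives the 720, 880, and 960 kB figures; a high density drive doubles the raw numbers the same way, which is where the >1.8 MB bulk HD formats come from.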
If the floppy drive unit was sold separately, they could fall back on Jack Tramiel's old business model with the VIC-20 and C64, where the base unit was sold cheap and the peripherals were sold at a profit. (In this case, the floppy drive could probably come with the maximum 512kB already onboard.) At the very least, if Atari went with something weird/different like disk based software, they'd be filling a market niche no one else attempted, with some potential for success in that end of things.

Granted, Atari COULD have done multiple of the above, like improving the Panther a bit to be more competitive for 1992 while still cheap, but including provisions for expansion, with plans to use a further development of the Slipstream chipset as part of a floppy disk or CD-ROM expansion module and the Jaguar itself as the bigger, more complex, full next gen console. (Granted, if they'd aimed at backwards compatibility, the Jaguar would likely have been somewhat different, at the very least with a different RAM configuration and Object Processor ... though for the latter, they could've just extended the Panther's modes to include the 16-bit CRY color mode and page-mode DRAM reads at a bare minimum, probably with a 16-bit RGB mode as well, but they wouldn't have needed the 24-bit color mode or 32-bit wide line buffers.)

Though I'd argue a 16-bit 4444 RGBY color mode would be quite useful, allowing 12-bit RGB color with 16 shades mapped into the 24-bit colorspace: it would allow easy re-use of the existing 8-bit and 4-bit functional blocks on the blitter and most of the existing CRY-to-RGB conversion logic, while enabling blitter based colored lighting and blending in 12-bit RGB with 4 bits of brightness/black-level Y intensity and 16 linear shades for 3D lighting effects, without the dithering or posterization you'd get with 15 or 16-bit RGB, just not as good as CRY itself allows. (ie for games that only need fade-to-black and use black distance fogging, like Doom, Quake, or Tomb Raider, CRY is best, but if you need colored lighting or want colored or white fogging effects, RGBY would be better.)

Technically speaking, given the existing 12-bit RGB format of the Slipstream but its use of full 16-bit wide CRAM entries, extending that to 16-bit RGBY and adding a direct 16-bit color mode and 4-bit logical shading on the blitter would probably be one of the simpler upgrades towards more impressive 3D. Allowing it to also work at 8bpp for 256 colors, provided the palette was organized as 4+4 bits (16 colors x 16 shades), the same logic could provide shading in 256 color modes, just with those constraints on the palette. Or you could save CRAM space (and give it to the DSP as extra RAM) with only 16 CRAM entries used for 16 colors, and the 16 shades provided by hardware mapping into the 16-bit RGBY color space.

Adding translucent blending would probably be a little more expensive, but adding a dither option (just rendering every other pixel as transparent) would be a cheap workaround, useful for scaled bitmaps and polygons and a lot better than nothing, or than having to use slow CPU rendering for dither effects. (You also can't just use dithering at the texture/texel level, since you'd get ugly scaling artifacts and coarse dither meshes ... that option only works well for rendering unscaled sprites/objects, and could already be done on the 1989 Slipstream in its sprite rendering mode, since you have per-pixel transparency by using the blitter's mask register to reserve one color value as transparent.)
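Just to make the proposed 4:4:4:4 RGBY format concrete, here's a sketch of how a pixel might expand to 24-bit RGB. The exact mapping is my own guess (straight linear intensity scaling); nothing like this appears in any Atari/Flare document:

[code]
/* Hypothetical 16-bit RGBY pixel: 4 bits each of R, G, B, plus a 4-bit
   linear intensity Y (16 shades, 0 = black), expanded to RGB888. */
#include <stdio.h>
#include <stdint.h>

static uint32_t rgby4444_to_rgb888(uint16_t p)
{
    unsigned r = (p >> 12) & 0xF, g = (p >> 8) & 0xF;
    unsigned b = (p >>  4) & 0xF, y =  p       & 0xF;
    /* expand each 4-bit channel to 8 bits (x17), then scale by Y/15 */
    unsigned R = r * 17 * y / 15, G = g * 17 * y / 15, B = b * 17 * y / 15;
    return (uint32_t)(R << 16 | G << 8 | B);
}

int main(void)
{
    /* white at full and half intensity, red at full intensity */
    printf("%06X %06X %06X\n",
           (unsigned)rgby4444_to_rgb888(0xFFFF),
           (unsigned)rgby4444_to_rgb888(0xFFF8),
           (unsigned)rgby4444_to_rgb888(0xF00F));
    return 0;
}
[/code]

The appeal is that the 4-bit Y multiply maps onto the kind of narrow 4-bit functional blocks the blitter already has, while the 12-bit RGB base color stays untouched, so colored lighting just means modifying R/G/B instead of a single CRY chroma index.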
  13. Crazyace addressed this at one point, I think even pointing to further elaboration on Gill's part, but in any case, 5 FPS would be severely underselling the capabilities, with some caveats. With the 8088 based development systems, performance was well below the production 8086 version, partially due to the CPU bus width, but mostly due to the lack of a bus latch, which meant the CPU could only access the bus outside of video DMA (during hblank and vblank) rather than interleaving with it like the ST or Amiga do. (And even then, the CPU actually only used half the free PSRAM access slots at 6 MHz; a 12 MHz 8086/V30/80186 or 68000 could have worked just as well on 4 t-state boundaries, like the ST does at 8 MHz.)

A 68000 probably would've been the cheapest option for a CPU that handles programs written in C reasonably well; I think even the 286 requires a fair bit more assembly optimization by comparison, at least relative to its on-paper performance. (If you're doing mostly 16-bit operations or less and optimize reasonably well for x86 segmentation, a 286 should be significantly faster than a 68000 at the same clock speed, with 0ws in both cases or even with 1ws on the 286. Both have 2 clock tick access time requirements, but the 68k cycles the bus every 4 ticks where the 286 does it in 2 at 0ws, and most instructions execute in 3 ticks on the 286 vs 4 on the 68000, with some taking only 2 ticks on the 286 and prefetch filled in 2-tick bus cycles at 0ws. But that doesn't say much about compiled code performance or how much compiler optimization is needed for comparable results ... plus, PC programmers working in x86 assembly would still probably have an easier time porting things to the 68000 than the reverse.)

If the context was doing Star Fox as close to 1:1 as possible to the SNES, with the full 2D background layer (with pseudo-rotation via character column scrolling) and having the DSP do comparable interpolated + reverb effects on sample based music, the 5 FPS argument might be valid, or perhaps even without the reverb but still doing interpolation on 8 channels at a 32 kHz sample rate. (You could do something that sounds close-ish with much less overhead, or even better sounding if you use larger, higher quality uncompressed samples, but the latter runs into a RAM space issue more than anything else.)

That, and while the 6 MHz 8086 would've been a bottleneck for some things, it's also the part of the chipset that should've been easiest to change, including switching to a 68000 as Atari may have wanted to do (and probably using an Atari TT based development system like they did with the Panther: 256 colors from 12-bit RGB would cover general graphics there, plus a 68k compatible CPU ... otherwise just use VGA PC based development systems like the Multisystem was using). Working within 256 kB of RAM and purely from floppy disk would also be a constraint, where ROM cartridges would allow more flexibility even with the same 512kB limit Star Fox used.

At some point in the Multi System presentation videos, during the polygon rendering analysis, there's mention of Starglider II running at 50 FPS as well. (I think that may even have been guys from Argonaut commenting, as they were developing for the system and rather excited about the potential for 3D.) It should've easily handled very nice/impressive versions of the Hard Drivin' based arcade games of the period, too.
(Atari Games was an entirely separate company from Atari Corp, but there was still a vested interest in cross-licensing or publishing for Atari systems, and there were Atari Lynx versions of several of those games.)

Beyond that, there's the issue of maxing out the DSP vs using it for sound processing. Using it for FM synthesis would eat a lot of its processing power, as would doing something akin to the SNES's sound system, with streamed ADPCM samples that also get interpolation and reverb buffer mixing. Adding another sound chip would offload a lot of that, but even short of that, you could do something much less DSP intensive, like Amiga MOD (or similar sample based tracker music) that just does 4 channel sound, possibly with extra channels for sound effects. A DMA sound routine that minimizes DSP RAM use as well as processing time shouldn't hurt 3D processing too much, but a separate dedicated sound chip would help a lot. (A cheap OPLL YM2413 plus the DSP slaved for simple PCM playback would get you something close to PC Sound Blaster capabilities, probably with better sounding music than many VGA games, if you had the talented UK/European chiptune composers handling the music.)

Atari had a deal going with Ensoniq at the time, which might have conflicted with any interest in using Yamaha chips, but without a decent chunk of DRAM onboard, those Ensoniq chips aren't all that impressive, and even with the DRAM, if using cartridge ROM you still have to store the samples there first, and Atari probably would've skimped on ROM sizes due to cost. (Floppy disk would've avoided that issue to some extent, but made use of RAM tighter.) But having an FM synth chip onboard allows much lower RAM and disk/ROM storage use while keeping the potential for sampled sound on the DSP when desired, and FM was very much industry standard for home computers and arcade games at the time, not to mention synthesizer keyboards (the OPLL itself was used in low-end keyboards, too).

OTOH, if they used floppies and filled up the 512kB DRAM block on the Slipstream, you could buffer snippets of streaming audio there and loop them: either one long looping track, or segments played in different orders somewhat like single channel MOD/tracker music, or 2-channel MOD/tracker music (separating melody from percussion, for example), using more RAM than typical sample tracker music but a lot less than one long stream (or at much better quality than one long stream).

Granted, for 2D games, you'd have all that RAM to potentially use for tiled animation to do well animated and/or pre-rendered parallax (using blitter block copies to quickly build the background, you can do multi-layer scrolling as fast as simple non-animated single layer scrolling). This would work even better in 16 color lowres mode, since you can copy twice as much graphics data per word and use less video DMA time and framebuffer space. With just 256kB of PSRAM and floppy disk media, you could compare the Slipstream to the PC Engine Super CD (with the 256kB system card), and the same sort of dynamic tiling tricks for parallax layers would be relevant, with the exception that the Slipstream uses the DSP to handle floppy disk transfers, where the PC Engine could stream CD-ROM data while playing chip synth music, dynamically updating a cache/buffer of tile data in main RAM.
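Going back to the tracker music option for a second: to be concrete about how cheap 4-channel sample mixing is, here's a rough C model of the inner loop (hypothetical code; the real thing would be Slipstream DSP code, and all the names here are made up):

[code]
/* Amiga MOD style 4-voice sample mixer: 16.16 fixed-point position
   stepping, signed 8-bit samples, per-voice volume, naive looping. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    const int8_t *data;  /* signed 8-bit sample */
    uint32_t len;        /* length in samples */
    uint32_t pos;        /* playback position, 16.16 fixed point */
    uint32_t step;       /* pitch: sample_rate / output_rate, 16.16 */
    int      vol;        /* 0..64, Amiga style */
} Voice;

static void mix_block(Voice v[4], int16_t *out, int n)
{
    for (int i = 0; i < n; i++) {
        int acc = 0;
        for (int c = 0; c < 4; c++) {
            if ((v[c].pos >> 16) >= v[c].len)
                v[c].pos -= v[c].len << 16;          /* loop to start */
            acc += v[c].data[v[c].pos >> 16] * v[c].vol;
            v[c].pos += v[c].step;
        }
        out[i] = (int16_t)(acc >> 2);  /* >>2 keeps 4 voices in 16-bit range */
    }
}

int main(void)
{
    static const int8_t sample[64] = { 0, 40, 80, 120, 80, 40 }; /* stub data */
    Voice v[4] = {
        { sample, 64, 0, 1u << 16, 64 }, { sample, 64, 0, 1u << 15, 48 },
        { sample, 64, 0, 3u << 14, 32 }, { sample, 64, 0, 1u << 16, 16 },
    };
    int16_t buf[256];
    mix_block(v, buf, 256);
    printf("first mixed sample: %d\n", buf[0]);
    return 0;
}
[/code]

Per output sample, that's just four table reads, four multiplies, and some adds, which is why this sort of playback leaves so much more DSP time free than FM synthesis or SNES style interpolation and reverb would.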
Again, a separate sound chip would avoid tying up the DSP this way, allowing more flexible use of DSP multi-tasking (or task switching) while relying mostly on chip synth music and sound effects. (I think you could do sampled sfx and/or percussion easily enough by embedding a PWM sample playback loop in with a floppy disk DMA routine, so you could still do some samples on the DSP without needing a separate PCM chip of any sort.)

512kB of DRAM would've made ports of most pre-1994 VGA DOS games relatively easy as well, and DSP+blitter assisted rendering wouldn't just be good for polygonal 3D, but for scaled sprite/bitmap stuff like Lucasfilm's Secret Weapons of the Luftwaffe series, Wing Commander 1 and 2, or Wolfenstein 3D. Even with just 256kB, a number of games should've been able to be reworked to fit within the limits, though ST/Amiga ports would be easier than PC games requiring significant hard drive installation from 3 or 4 or more floppies. Expanded RAM plus floppy media would've offered the potential for a lower budget multimedia-rich experience, allowing cutscenes and/or voice acting otherwise only really seen on CD-ROM based systems, PCs, or some disk-swapping heavy Amiga games. More RAM = fewer loading sequences, and either less disk swapping or less need for redundant data on disks (frequently or globally used code and data just gets loaded with the main boot disk and stays in RAM).

On the 2D end of things, something like Jazz Jackrabbit would be a good example of what the system should be capable of in 256 color mode with sample based sound on the DSP. That game doesn't really do any parallax at all, just focusing on smooth scrolling of colorful backgrounds and well animated sprites, so it fits the bill of the Slipstream's strengths pretty well without tricks like pre-rendered parallax. (The 3D/pseudo-3D bonus stages should've been doable on the system, too.) RAM requirements would honestly have been the biggest factor compared to what Jazz requires on the PC. In that last respect, it's more in the Jaguar's class of system requirements as far as an actual port of Jazz goes (the 1994 release date fits the Jaguar timeline as well ... even if they'd foregone the 1993 test market, refined the hardware, and built up a larger library before release). For that matter, it's very much in line with what the Falcon should've been capable of. Jazz's tracker music really isn't that intensive either, and not something you needed a GUS or the Falcon's DSP for: a 386SX-33 or 40 could handle it at max sound settings without noticeable performance loss, though running at max graphics settings does take more of a hit. (The main difference is more priority layers for foreground vs sprites vs backgrounds, from what I remember.)

Also, like the Panther, the Slipstream would be most impressive in 2D when working with relatively large sprites, maxing out closer to (or ahead of) what the SNES or Mega Drive could do. OTOH, with a blitter you can also do a lot of line drawing or pixel particle effects that you'd only get on the Panther if more RAM was used (software rendering to a framebuffer). The Panther could do scaled sprites way faster than any of the others, but the Slipstream's DSP and blitter would accelerate software rendered scaling/rotation or texture mapping, more like what the Mega CD does in hardware, but in 256 colors like the SNES's Mode 7, and possibly even similarly fast to the Mega CD games that used that feature.
(The Mega Drive's VRAM DMA bottlenecked framerate a lot, even though the Mega CD Gate Array can render relatively fast ... I think it actually does scaling/rotation faster than the Jaguar blitter, but it's limited to rendering 16 color tiles loaded into a "stamp" cache for 8x8 or 16x16 textures, rendering them out in Mega Drive tile format.)
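As a footnote on the 286 vs 68000 bus timing comparison earlier in this post, the peak-bandwidth arithmetic is simple enough to show directly (0 wait states, 16-bit buses, tick counts as quoted above):

[code]
/* Peak 16-bit bus bandwidth from ticks per bus cycle at a given clock. */
#include <stdio.h>

static void bus(const char *cpu, double mhz, int cycle_ticks)
{
    double mbps = mhz / cycle_ticks * 2.0;  /* 2 bytes per bus cycle */
    printf("%-12s %4.1f MHz, %d-tick cycle -> %5.2f MB/s peak\n",
           cpu, mhz, cycle_ticks, mbps);
}

int main(void)
{
    bus("68000/010", 8.0, 4);  /* 2-tick access, 4-tick cycle */
    bus("80286",     8.0, 2);  /* 2-tick access, 2-tick cycle */
    bus("68020",     8.0, 3);  /* 2-tick access, 3-tick cycle */
    bus("8086/V30",  8.0, 4);  /* 3-tick access, 4-tick cycle */
    return 0;
}
[/code]

At the same 8 MHz, a 286 can move twice the data per second over its bus as a 68000 (8 vs 4 MB/s peak), which is a big part of why it looks so much better on paper than it often did in typical compiled code.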
  14. Leonard Tramiel seems to have been more talkative this past year about his time at Atari, including the period of Panther and Jaguar development with the Flare guys. I haven't seen anyone ask him about the Slipstream or Multisystem, and it'd certainly be interesting to know if Atari ever considered licensing that chipset for their own use (as an alternative to the Panther) or localizing the entire Multisystem. Or, for that matter, given the funding and manufacturing logistics issues Holloway had actually bringing the Konix Multi System to market, whether he ever approached Atari (or at least Atari UK) to partner on the UK release. On top of that, the few developers who had access to working Panther development systems also seem to be ones who'd been developing for the KMS.

It seems pretty clear that Konix had a non-exclusive license for the Slipstream chipset, so Atari could've used it for their own purposes without legal trouble there, unless maybe Atari insisted on having exclusive rights to the hardware. With the DRAM cartridge limited to use as an add-on, the KMS itself already fit the same bill as the Panther in the "no DRAM" design goal that prompted the Panther's development. It used PSRAM, but that would still be outside the DRAM commodity market, with its severe shortages in 1988/89/90 (though this improved significantly by the end of 1990); plus, the US import price floor restrictions on Japanese DRAM were specific to actual DRAM chips, not PSRAM. (That issue was also avoided by Atari's overseas assembly plants, but it would've restricted their ability to expand assembly to the US, which they were interested in doing in the late 80s; plus, domestic production makes smaller scale test-market volumes or iterations of pre-production prototypes a lot easier, and you'd want to use the same mix of component manufacturers for that as for final production to ensure reliability.)

The only people I've seen post interviews, or even just short correspondence, with Leonard seem to be mostly focused on the Atari stuff, or maybe Atari and Commodore, without much knowledge of the overlapping work with Flare and Konix in 1987-1990. Or, if they're at least aware of the Multisystem, they haven't read deeply enough into your website to realize the potential for other companies to make use of the Slipstream ASIC.
  15. @Lostdragon One other possibility that might fit multiple sketchy memories is that some Panther Object Processor chips worked, but yields were extremely poor, to the extent that only a few development boards ever worked. They could've sent out the few working ones while hoping yields would improve (or fixes could be made), but it never happened. If those sorts of problems were ongoing throughout 1990, I could understand the sketchy memories of the time. It could have been something like: a few development systems end up working OK, they send those out, then continue with chronic problems of most examples not working in any useful way.

This would contrast with the first run of Jaguar prototypes with TOM version 1 onboard (which "taped out" in late 1991 and came back in silicon in early 1992). Those TOM chips had significantly worse bugs than the second revision used for production in '93, but they at least appear to have worked without significant yield problems. The production units also used 26.6 MHz vs the 32 MHz (probably 32.215905 MHz) heavily implied/planned in the documentation, but it's not clear whether this was a yield issue or about keeping costs down elsewhere, like avoiding heat sinks or better case ventilation, or the DRAM speed chosen, or ROM speed for that matter, especially since JERRY has a fixed bus cycle time of 6 clock ticks. (Albeit in that case, 30 MHz would've made more sense paired with 200 ns 16-bit ROMs, which would've let JERRY load sound data from ROM at its full speed.)

It seems like Atari was having problems with their overseas manufacturing/assembly plants as well (I know there was the big one in Taiwan; I'm not sure about others, or whether the Taiwan plant was the problem one), but per 1992/93 era Atari magazine articles I've come across, faulty quality control testing equipment contributed heavily to delays in the Falcon's release. In that case it was failing perfectly good hardware (assuming the magazine articles are accurate) that would've otherwise allowed the Falcon to be released months earlier. If they had problems like that, it could have complicated troubleshooting other things too. The articles also mentioned Atari had been soured on their overseas production facilities to the extent of abandoning them (or considering abandoning them), but I'm not sure of the full extent of all that. If they did shut down the big Taiwan plant, that probably would've significantly increased their assembly costs for the Jaguar ... it may also have driven their decision to source a Philips chipset for the Jaguar CD-ROM drive rather than one of the Japanese ones, though they also opted to license the chipset rather than buy it off the shelf, which was likely costly given the small numbers the Jag CD units were produced in, or even the Jaguar itself (around 235,000, with about 150,000 sold by 1996 iirc). I know the Sega Mega CD used a Sanyo chipset rated for both 1x and 2x CD-ROM speeds, though Sega used it with just a 1x speed drive mechanism.

I forget whether this was from ST Format or ST User magazine, but they're both archived pretty comprehensively online. From the pictures of Panther dev hardware I've seen, the Panther ASIC itself was manufactured by Toshiba in Japan. As I recall, there were even some versions of the Jaguar chips that are Toshiba marked, though many are Motorola marked. I think IBM made the actual motherboards for the Jaguar.
In any case, it doesn't explain the apparent lack of consideration of Flare's Slipstream ASIC as an alternative, especially as the Konix Multisystem had more developers already working on it in 1989 than the Panther had in 1990. The Konix deal was pretty clearly a non-exclusive license, and I don't think it even covered any distribution outside the UK, so Atari should've been free to adopt it or even have it reworked/expanded, though some changes wouldn't have required modifying the existing production-ready ASIC at all, like switching from floppy disk to ROM cart (potentially with provisions for a floppy drive expansion) and bank switching to expand the limited address space.

It was definitely less powerful than the Panther for producing 50/60 FPS 2D games (let alone 50/60 FPS scaled sprites/objects), but it would've been a lot less exotic for most home computer game developers to work on, even if they'd stuck with the 8086, and the 256 color framebuffer would be much simpler to make good use of color in than the Panther's 32 colors (albeit that was a limitation of the line buffer and palette RAM, not the Object Processor itself, which already supported a 256 color palette and 1, 3, 15, or 255 color objects at 1, 2, 4, or 8 bits per pixel). It would've been miles better than the Panther in 3D, though adding a sound chip other than the DSP would've made that a lot more flexible ... and a faster CPU would've made it a lot less bottlenecked. (With the way wait states are set up for bus sharing, it seems like you could just drop in a faster CPU and synthesize the appropriate clock input from the Slipstream's source clock, so long as the CPU is still slow enough to work with 0 wait states in PSRAM, since it doesn't look like the Slipstream ASIC adds any wait states outside of video DMA time or PSRAM refresh time ... which would mean with the 100 ns PSRAM used, with 160 ns cycle times, you could use up to a 20 MHz 68000, a 25 MHz 8086/186/NEC V30, a 12.5 MHz 286, 386SX, or NEC V33, or an 18.75 MHz 68020; see the sketch at the end of this post.)

The NEC V33 would've been an interesting option, as it's faster than a 286 internally for a number of things, but lacks 286 protected mode and accesses its 24-bit (16 MB) address space via LIM EMS 4.0 compatible bank switching (ie the standard EMS system a number of early/mid 90s DOS games used, though others went for actual 386 specific memory mapping modes). It was basically a really fast 186/V30/286 real-mode compatible processor, but one incapable of running any protected mode applications or OSs (like Windows 3.x), and the pinout wasn't 286 compatible for some reason, but it should've been cheap enough to compete in its market niche. It also completely avoided the Intel microcode related legal issues by not using microcode (all instructions are implemented in wired logic), though I don't think that was ever a big problem for obtaining NEC's V20 or V30 chips: there were no import restrictions or cease and desist orders, rather NEC kept selling them and ended up having to pay Intel royalties. (Unlike UMC, who were barred from selling their enhanced 486 class U5S Green CPU in the US.)

On a side note, it SHOULD have even been possible to incorporate the Slipstream into the Atari STe or TT at the time, though the latter would make less sense, as the Slipstream wasn't capable of VGA display modes (it would have to run about 2.1x as fast to be useful for 640x480 60 Hz VGA, and use 80 ns SRAM rather than PSRAM, if it could actually be clocked that high with any sort of useful yield).
The trick would be enabling its external sync inputs and using the STe's sync and display enable outputs to drive the display parameters for 320x200 in 16 or 256 colors, or 640x200 in 16 colors. However, doing so would disable its hardware scrolling functions, as it's limited to addresses on 256 byte boundaries like the original ST SHIFTER (though it also supports 16-bit word-wise horizontal scrolling), so its scroll features only work in 256 or 512 pixel width modes. Still, it uses chunky pixels rather than bitplanes, so software scrolling is a lot easier than on the ST, but for best performance you'd want to use the native 11.93 MHz clock and 256/512 pixel resolutions (up to 512x256 at 50 Hz) for dedicated games or graphics software. The point being, it COULD still be made to work at standard ST resolutions, and it was capable of bit-to-pixel expansion for character painting (so it could have used the existing TOS character/graphics set from 1-bit bitmap data).

It could've even been made to genlock as an overlay for STe SHIFTER video, but that would've required external analog video mixing circuitry and wouldn't have been the cheapest/minimalist option. (Granted, it would easily allow a 16 color STe background with 256 color Slipstream sprites and/or polygons, very much like the Sega 32X does for the Mega Drive.) Mind you, to match STe pixel clock outputs, the Slipstream would need to run at 16.108 MHz rather than 11.93, but that seems at least realistically possible compared to VGA clock rates. That, or the STe SHIFTER compatible overlay mode would be limited to just 320x200 in 16 colors, with the Slipstream clocked down to 8.05 MHz instead and 256 color mode limited to 160x200 as such (but supporting the full 256x240/256x256 8bpp 256 color mode at the native 11.93 MHz clock rate ... 11.823 MHz for PAL).

Without bothering with genlock support, the Slipstream DSP and blitter could still be available when only using STe SHIFTER graphics (though for bitplane graphics the blitter would mostly just be useful for block fill and block copy operations), and the PSRAM could be available as local CPU-bus fast RAM for software supporting it. (A 16 MHz 68000 would easily work with it, though unlike the cache of the MEGA STe, you wouldn't have immediate acceleration of all software ... TOS could be patched to support fast RAM and allocate it as such, though.)

I've never seen interviews with the Flare team members or Atari staff addressing whether the Slipstream hardware was ever considered.
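On the 0-wait-state CPU ceilings mentioned above in this post (100 ns PSRAM with 160 ns cycle times), the arithmetic works out as follows, with the access-time and cycle-time requirements each imposing a clock ceiling and the lower one winning:

[code]
/* Max 0ws CPU clock on 100 ns access / 160 ns cycle PSRAM, from each
   CPU's access and bus-cycle lengths in clock ticks (as quoted above). */
#include <stdio.h>

static double ceiling_mhz(int access_ticks, int cycle_ticks)
{
    double by_access = access_ticks * 1000.0 / 100.0; /* 100 ns access limit */
    double by_cycle  = cycle_ticks  * 1000.0 / 160.0; /* 160 ns cycle limit */
    return by_access < by_cycle ? by_access : by_cycle;
}

int main(void)
{
    printf("68000:         %5.2f MHz\n", ceiling_mhz(2, 4));
    printf("8086/186/V30:  %5.2f MHz\n", ceiling_mhz(3, 4));
    printf("286/386SX/V33: %5.2f MHz\n", ceiling_mhz(2, 2));
    printf("68020:         %5.2f MHz\n", ceiling_mhz(2, 3));
    return 0;
}
[/code]

That reproduces the 20 / 25 / 12.5 / 18.75 MHz figures: the 68000 is access-time limited, while the others hit the 160 ns cycle-time limit first.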