
Why CRY?


kool kitty89


However, having RGB (for the consumer models at least) really wasn't important (even in mainland Europe it was only really popular in select regions -especially France), so having video directly output as Y/C (with no need for an external video encoder) could actually be MORE cost effective. (ie no need to buy off the shelf video encoders -and no need for the added board space to accommodate them) Except that's assuming you dropped the RGB modes entirely in favor of optimizing completely for Y/C graphics.

 

Not sure where you are getting this from? RGB (or SCART) type cables were available and popular for pretty much every console in the UK, and I dare say the whole of Europe, pre-HD consoles. I had SCART for every console I owned; it was the first add-on I would purchase, before a second controller, and that is not just me: the cables were often bundled in with new units, or displayed in quantity along with the other accessories in the stores.

No, I mean it wasn't very common to have higher end TVs with SCART input in general until the late 90s (at the earliest). At least from the comments I've seen from UK gamers of the time, a TON of users were still working with RF-only TVs in the mid 1990s. (a high percentage in the US too, but probably not as high -though a ton of people also probably had composite but no S-video inputs -and, really, prior to DVD becoming popular, game consoles and computers were among the few TV-capable devices to even bother supporting anything better than composite)

Plus, didn't the UK (unlike mainland Europe) also sell TVs with RCA connectors for composite video and audio? (vs Europe going exclusively to SCART for all non-RF input to TVs until HDMI)

 

I suspect that the posts on the forums were from idiots or people that didn't have terribly high end TVs or didn't know what a SCART socket was. I had a cheap-as-chips Hinari TV set and it had a SCART; the family's main room TVs had SCART from the mid-to-late 80s, possibly sooner, and none of them were terribly high end. High end sets usually had multiple SCART sockets on them.

 

But why on earth would you want them to change from a high quality image to some horrible mush? It makes no sense.

 

I'd also gotten the impression that there were serious complaints over the Saturn (and I think PSX) launching in Europe with no RF adapter pack-in, which led to Nintendo's choice of bundling an RF box with the N64 when it launched in the UK (vs only composite AV cables for the standard US set).

 

Can't say for the Saturn, but PSX definitely came with a modulator. Manufacturers will include the bare minimum so that the device can be connected to the majority of units, I think they have to do this by law in the UK.

 

I think the French Jag doesn't have a modulator? (I may be wrong on this, but I have one that has no modulator or space for one, I have assumed it to be French version)

Mainland Europe is a separate issue (especially France), and some countries mandated SCART for all TVs during the 80s. (France was the first to do so, making SCART a requirement on all TVs sold in France by 1980.)

Thus, it's not surprising that RGB-native devices were released in France with only RGB connections.

 

The French Master System II was made with no RF modulator or composite video output, only the RGB and audio lines connected on the AV port. (I think the French versions of the N64 were also the only models to support RGB -without modding)

 

Not really sure what the problem is here. RGB = good picture: solid, stable, clear. Why would you want RF? Composite is OK, but just that: OK. And non-RGB component I have never seen on a TV set yet.


Were ray-casting engines even being used back then? I'd thought those didn't become common until the early 90s, with a handful used in the late 80s (alongside many other developers trying to do similar effects using 3D polygons). I think MIDI Maze used ray-casting, though.

 

I know there were the fractal engines and several fully pre-rendered corridor/maze/dungeon crawler games (like Tunnel Runner), but not actual ray-casting engines.

 

According to the YouTube description "Game made by Paul Edelstein in 1983." You can even see the grid map at the bottom of the game which indicates some use of a raycasting scheme.

http://www.youtube.com/watch?v=5ByUh18MjUU&list=PL599E87C2159EE706&index=39

 

Raycasting actually goes all the way back to the early raytracing days, when MAGI started using it.

 

Flare had to look at things from the perspective of 1989-1991 (when the Jaguar's core logic designs were being laid down, mostly in 1990), so a ton of guesswork and foresight were necessary on their part. They had 2D games to consider, and the limited polygonal arcade and computer games of the time, as well as pseudo-3D (though the height-map engines -Doom/voxel games- had yet to really emerge in the mass market; Wolf 3D and Comanche really started that).

 

On top of that, Sega was designing an arcade board, something totally exclusive to their in-house software teams. Thus they had no precedent for catering to market standards, as 3rd parties (generally) wouldn't even be using it.

And in that respect, Flare had a much better perspective for 3D game/hardware design . . . as did pretty much any significant R&D team that worked on the level of mass market computer/console platforms of the time.

 

Well, that was somewhat the point I was trying to relay... Texture mapping and smooth shading probably weren't considered in the Jag's design, therefore it would be a somewhat skewed view to get the Jaguar to do the kind of graphics that a PS1 could do, as far as 3D environments having texture mapping with smooth shading effects as part of the artistic look found in games like Ridge Racer. I don't think those kinds of considerations were really put on the table when the Jag was designed. Although the industry standards were out there, everyone seemed to be doing their own thing, so the industry standard didn't seem as clearly defined until the PS1 dominated the market in 1995. Didn't want to go too much off topic either, but it's still very much relevant to the discussion, because the Jag just wasn't designed for texture mapping and smooth shading.

 

The Neo Geo used 4-bit graphics (15 colors per sprite) like most other 2D arcade boards in the late 80s and early 90s. It has 256 palettes of 15 colors each.

 

I'm not talking simple checkerboard dithering for flat shading, but far more complex realtime dithered interpolation like X-Wing and Tie Fighter used. (as well as the Playstation GPU and many PC accelerator cards -I think some PS2 games also used it)

 

Actually, X-Wing/Tie Fighter seemed to use more intensive dithering than the simple pattern-block/Bayer dithering used in the PS1 and most PC video cards of the late 90s. It looks more like Floyd-Steinberg dithering.

http://media.giantbo..._wing_super.png

http://images.wikia....EFighterDOS.png

http://www.btscene.e...Collection.html
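
For reference, a minimal sketch of what Floyd-Steinberg error diffusion actually does (hypothetical C, not taken from any of these games): each pixel is snapped to the nearest available level and the error is pushed onto neighbors that haven't been visited yet. That forward dependency is also why it's inherently serial and normally pre-rendered in art rather than done in realtime:

```c
#include <stdint.h>

static uint8_t clamp8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

/* Floyd-Steinberg: quantize 8-bit grayscale down to two levels,
   diffusing each pixel's error right and down with the classic
   7/16, 3/16, 5/16, 1/16 weights. */
void fs_dither(uint8_t *img, int w, int h)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int old = img[y * w + x];
            int q   = old < 128 ? 0 : 255;  /* nearest output level */
            int err = old - q;              /* error to distribute  */
            img[y * w + x] = (uint8_t)q;
            if (x + 1 < w) img[y*w + x+1] = clamp8(img[y*w + x+1] + err*7/16);
            if (y + 1 < h) {
                if (x > 0)     img[(y+1)*w + x-1] = clamp8(img[(y+1)*w + x-1] + err*3/16);
                               img[(y+1)*w + x]   = clamp8(img[(y+1)*w + x]   + err*5/16);
                if (x + 1 < w) img[(y+1)*w + x+1] = clamp8(img[(y+1)*w + x+1] + err*1/16);
            }
        }
}
```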

 

I know... Floyd-Steinberg dithering is evident in most of the SNK images. The color transitions look like they could be manipulated very easily by a CPU if done right. The PS1 used a standard where texture map images are 4-bit graphic JPEG-compressed for quick display on 3D objects. It's one of the reasons I mentioned SNK: they seem to have been using dithering very cleverly, and with their sprites kept small enough, they were able to use zoom effects in their games. You also mentioned the Sega CD having rotation and skewing effects simulating 3D perspective, which is commonly used in texture mapping... Like I said earlier, the Jag's best option is to look at how dithering, zooming, skewing, raycasting, and voxels (2.5D stuff) -basically all of the low-end simple effects used on previous systems- were done; all of that is still available to use today in a more agreeable way to produce great looking 3D on the Jaguar.

Edited by philipj

Well, that was somewhat the point I was trying to relay... Texture mapping and smooth shading probably weren't considered in the Jag's design, therefore it would be a somewhat skewed view to get the Jaguar to do the kind of graphics that a PS1 could do, as far as 3D environments having texture mapping with smooth shading effects as part of the artistic look found in games like Ridge Racer. I don't think those kinds of considerations were really put on the table when the Jag was designed. Although the industry standards were out there, everyone seemed to be doing their own thing, so the industry standard didn't seem as clearly defined until the PS1 dominated the market in 1995. Didn't want to go too much off topic either, but it's still very much relevant to the discussion, because the Jag just wasn't designed for texture mapping and smooth shading.

Not really. As I mentioned above: Flare considered smooth shading to be the wave of the future, and made it one of the defining features of the Jaguar. That's one of the main reasons for having CRY at all: highly optimized gouraud shading. (That was a decision made when gouraud shading was mainly limited to 3D workstation graphics.)

 

The Jaguar was definitely not designed with flat shading in mind as the common rendering technique.
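
To make the "optimized for gouraud shading" point concrete: as I understand the CRY format, the high byte of each 16-bit pixel holds the chroma (two 4-bit axes) and the low byte is an 8-bit intensity, so shading a span only interpolates the low byte. A minimal sketch, with that bit layout as an assumption (in RGB you'd need three interpolators per span instead of one):

```c
#include <stdint.h>

/* Gouraud-shade one span in CRY: chroma is constant across the
   polygon, intensity is one 8.8 fixed-point interpolation. */
void cry_gouraud_span(uint16_t *dst, int count,
                      uint8_t chroma,   /* packed 4+4 chroma byte     */
                      int32_t i,        /* 8.8 fixed-point intensity  */
                      int32_t di)       /* per-pixel intensity step   */
{
    for (int x = 0; x < count; x++) {
        dst[x] = (uint16_t)((chroma << 8) | ((i >> 8) & 0xFF));
        i += di;
    }
}
```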

 

Flare, unlike Sony, 3DO, and Sega, also added hardware Z-buffering to the system and a general-purpose "GPU" (the only other graphics processors in that vein at the time being the TMS340 series, and later the RSP in the N64).

As Kskunk pointed out at one point, the Z-buffer logic takes up more chip space than PSX-speed texture mapping logic would have (specifically, having 64-bit read and 256-bit write buffers for texture mapping). The caveat would be doing Z-sorting in software, namely using the GPU (the 3DO, PSX, and Saturn use their CPUs).

 

That was just one of those things that Flare just made the wrong guesses on. Computer games weren't notably using texture mapping until 1992 (and not really for polygons until '94) and the first texture mapping arcade boards appeared in 1993. (I think the Sega CD was actually the first home console/computer to support affine texture rendering of any sort in hardware -and it's rather like the Jaguar's texture mapping in that it was aimed mainly at scaled/rotated objects but is technically full affine line rendering -and can be used to texture map polygons rendered/filled on a line by line basis)

So, at the time, 3D workstations would have been the only major example of texture mapped 3D at all.

 

Plus, as it is, there are ways to get fairly fast texture mapping routines working on the Jaguar. (like rendering to a buffer in GPU SRAM and then copying that "tile" out to the framebuffer when it's complete, or going a step further and caching several pixels of a texture in GPU RAM and rendering to the GPU buffer from there) That will also reduce bandwidth consumed in main memory (64-bit and FPM accesses are used rather than slow single-pixel DRAM reads/writes, and more time spent working in SRAM) so the OPL, DSP, and 68k have more bus time.

Those routines should actually be faster than the 3DO, though probably slower than the Saturn. (definitely slower for actual polygon rendering -more set-up time and overhead than the Saturn's hardware quad renderer- though for rotated objects it might be closer)
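
For what it's worth, the general shape of that "render locally, flush in bursts" idea is sketched below in generic C (hypothetical sizes and layout, nothing Jaguar-register-accurate -and see kskunk's reply further down for why the real-world gains are much smaller than they look on paper):

```c
#include <stdint.h>
#include <string.h>

#define TILE_W 32
#define TILE_H 32

/* Small local buffer standing in for GPU SRAM (sizes are made up). */
static uint16_t tile[TILE_W * TILE_H];

/* Rasterize into the fast local buffer, then push the finished tile
   to the DRAM framebuffer in wide sequential copies instead of
   scattered single-pixel writes. */
void flush_tile(uint16_t *fb, int fb_stride, int tx, int ty)
{
    for (int y = 0; y < TILE_H; y++)
        memcpy(&fb[(ty + y) * fb_stride + tx],
               &tile[y * TILE_W],
               TILE_W * sizeof(uint16_t));
}
```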

 

I know... Floyd-Steinberg dithering is evident in most of the SNK images.

I wasn't talking about dithering (or stippling) in art, but realtime dithered interpolation for shading effects.

 

Dithered and stippled computer/game art/graphics have been used since the 70s; that's hardly anything new and certainly not resource intensive (the dithering is all pre-rendered). Simple bars or checkerboard dithering also aren't very intensive to do on-the-fly for solid shaded objects, but using dithering for smooth shading is a separate issue.

 

 

The PS1 used a standard where texture map images are 4-bit graphic JPEG-compressed for quick display on 3D objects.

What? The PS1 only used JPEG for FMV and static screens AFAIK. (JPEG also tends to work badly for small images anyway, and "4-bit JPEG" makes little sense.)

The textures on the PSX were stored as 4, 8, and 16-bit bitmaps in VRAM (not actually VRAM, but I mean the video bus's RAM). The decompression ASIC also supported RLE textures, but that would be of limited use. (If you could spare the CPU time, other lossless schemes could be used -LZSS/LZW-based schemes might be useful, or more specific schemes for animated textures/sprites. There's a very simple lossless 4x4 block vector quantization scheme implemented on the Genesis that's simple enough for even the 68k to decode in realtime, though compression is only ~2:1.)
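
A decoder for that sort of scheme is about as cheap as decompression gets. A hedged sketch of what a lossless 4x4 block VQ unpack could look like (the exact Genesis layout is an assumption: here, a table of unique 8-byte blocks -4x4 pixels at 4bpp- and one byte index per block):

```c
#include <stdint.h>
#include <string.h>

/* Unpack `blocks` 4x4 blocks: each index byte selects one 8-byte
   codebook entry.  Lossless as long as every distinct block in the
   image fits in the 256-entry codebook; trivial enough for a 68000. */
void vq4x4_decode(uint8_t *dst, int blocks,
                  const uint8_t codebook[][8], const uint8_t *indices)
{
    for (int b = 0; b < blocks; b++)
        memcpy(dst + b * 8, codebook[indices[b]], 8);
}
```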

 

It's one of the reasons I mentioned SNK: they seem to have been using dithering very cleverly, and with their sprites kept small enough, they were able to use zoom effects in their games.

Yes, careful dithering can allow (limited) scaling without odd artifacts. The fact that the Neo Geo can only down-scale makes the parameters more limited too. ;)

 

But, again, realtime dithering is a different case in general. (Though simple bar/paired-pixel or checkerboard dithering is far less intensive than using ordered -let alone diffused- dithering for shading. Again, the PSX uses ordered dithering in hardware to shade smoother than 15-bit RGB normally allows . . . at the expense of looking grainy/artifacted, especially since there's no threshold to control how much dithering is used -it's all or nothing, selectable on a per-object/polygon basis.)
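
For illustration, that kind of hardware ordered dithering boils down to adding a tiny screen-position-dependent bias before truncating each channel. A C sketch -the 4x4 matrix below matches what's commonly documented for the PSX, but treat the details as an assumption:

```c
#include <stdint.h>

/* Bias values in the -4..+3 range, indexed by screen position. */
static const int bayer4[4][4] = {
    { -4,  0, -3,  1 },
    {  2, -2,  3, -1 },
    { -3,  1, -4,  0 },
    {  3, -1,  2, -2 },
};

/* Truncate one 8-bit channel to 5 bits with ordered dithering: the
   quantization error becomes a fine fixed pattern instead of banding. */
uint8_t dither_to_5bit(int c8, int x, int y)
{
    int v = c8 + bayer4[y & 3][x & 3];
    if (v < 0)   v = 0;
    if (v > 255) v = 255;
    return (uint8_t)(v >> 3);   /* 0..31 */
}
```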

 

You also mentioned the Sega CD having rotation and skewing effects simulating 3D perspective, which is commonly used in texture mapping...

It's not like texture mapping, it is texture mapping. The Sega CD graphics processor uses affine line rendering to generate the scaled/rotated objects (very similar to what the Jaguar does, but limited to 4bpp . . . also similar to the Saturn and 3DO, but without the support for projecting 3D-warped quads). To use that for texture-mapped polygons, you have to do similar things to software-rendered 3D or semi-accelerated 3D on the Jaguar: that is, set up each line of a polygon separately (calculate the end points and then fill the line), with the main difference being that you're filling with a line of texture rather than a solid (or shaded) color.
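
The inner loop of that per-line approach is tiny; a generic sketch of one affine textured span in 16.16 fixed point (hypothetical helper, not any particular system's code):

```c
#include <stdint.h>

/* Fill one scanline of a polygon by stepping a fixed-point (u,v)
   vector across the texture - the "line of texture" fill described
   above.  tw = texture width in pixels. */
void affine_span(uint16_t *dst, int count, const uint16_t *tex, int tw,
                 int32_t u, int32_t v,      /* 16.16 start coords    */
                 int32_t du, int32_t dv)    /* 16.16 per-pixel steps */
{
    for (int x = 0; x < count; x++) {
        dst[x] = tex[(v >> 16) * tw + (u >> 16)];
        u += du;
        v += dv;
    }
}
```

The per-polygon cost is then all in the setup: calculating the end points and the (u,v) gradients for each line before running this loop.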


I suspect that the posts on the forums were from idiots or people that didn't have terribly high end TVs or didn't know what a SCART socket was. I had a cheap-as-chips Hinari TV set and it had a SCART; the family's main room TVs had SCART from the mid-to-late 80s, possibly sooner, and none of them were terribly high end. High end sets usually had multiple SCART sockets on them.

The argument was never whether SCART was well known or readily available, but rather, how many people actually had SCART capable TVs in 1993-1996.

 

And definitely in the US, most people had composite or RF (even though S-video had been available on higher end sets since 1987/88).

 

But why on earth would you want them to change from a high quality image to some horrible mush? It makes no sense.

Since it would make little difference to the average user (or all users in the US -if we're looking from that perspective).

 

Let alone the sort of quality that could actually be had via S-video with a good encoder and TV. (The Jaguar's NTSC encoder seems rather mediocre . . . composite has some pretty nasty chroma moire -worse than what I've seen from the CXA1145s used in the early model Genesis- though dot crawl isn't too bad. Saturn, PSX, and N64 definitely have better S-video, probably modded Genesis too; the SNES is harder to compare due to the lower resolution.) RGB/component is far more important once you get to higher resolutions, but not nearly as big a difference from the common ~320x240 res stuff (generally a 5-7 MHz dot clock).

 

And anyway the idea was to use a more flexible/optimized colorspace without unreasonable cost overhead (remember, the Jaguar was a cheap/affordable system first and foremost). Hell, they could even add provisions for an external RGB adapter for higher quality than S-video. (with the added cost passed on to said adapter)

 

Can't say for the Saturn, but PSX definitely came with a modulator. Manufacturers will include the bare minimum so that the device can be connected to the majority of units, I think they have to do this by law in the UK.

Composite was common enough in the US for consoles to switch largely to that by the time the N64 launched at least (possibly earlier). RF adapters were accessories. (and if you at least had a VCR, you probably already had your own composite=>RF adapter ;))

 

Not really sure what the problem is here. RGB = good picture: solid, stable, clear. Why would you want RF? Composite is OK, but just that: OK. And non-RGB component I have never seen on a TV set yet.

I'm not sure what you're getting at.

I'm just trying to establish what the most common connection methods were for users at the time. For France, they obviously had RGB commonly by the early 80s, presumably some other parts of mainland Europe as well, but it seems like it took far longer for the majority of users to adopt SCART TVs in the UK. (The context was that a very large percentage of UK homes didn't have SCART-capable TVs in the mid 1990s -or if not homes in general, then not the TVs used for game consoles, say if a child had their own low-end TV rather than game systems all sharing the main household TV.)

 

 

But as for RGB in general, aside from it being nice to have for computers/consoles and such, I really don't see why it was even pushed like it was in the 80s (or even early 90s) since there was very little it could be used for unless you wanted to connect certain computers and (later) game consoles to it. (sort of a niche application)

VHS, SVHS, and Laserdisc all used composite or S-video (and SVHS and LD were very niche while VHS had no use for S-video -LD often didn't either).

So for home use, RGB was actually rather pointless to have for a TV in a home entertainment set-up. (like S-video was in the US until DVD got big -at which point YPbPr became common as well -though I'd argue they should have gone RGB instead since it was a more common analog standard)


Again you are missing it. Apart from the very, very first TV we had at home (a B&W TV), the rest all had SCART (mid 80s). And a small bit of trivia: Greece at the time (as now) was much below the European average in everything...

You are clearly misinformed about that issue, and your anecdotal evidence is not as good as mine ;)

Edited by Christos

I don't know where you are getting the idea that most TVs didn't have SCART mid 90s... you would have to be buying bottom-of-the-range utter crap to NOT get SCART in the UK mid 90s. Also VCRs had SCART too at around the same time. Then there are the satellite TV systems, which again were SCART based. So yeah, there were plenty of devices that used SCART in the UK from the early 90s onwards, excluding computers.

 

I'd assume providing RGB direct from the hardware to a display device, as seems pretty common on all computer hardware of the time, is a cheaper solution requiring less electronics than a composite signal; surely the correct argument would be to provide an external composite/S-video encoder? Also not sure how an external RGB device would connect and give a better image if the pure RGB wasn't there... and if it was there, this would just be a SCART lead.

 

I'd imagine RGB is quite common in Japan too, so that's probably the majority of the customer base having RGB-capable equipment, vs, it seems, the US, who are stuck with S-video/composite? Also, wasn't the chipset designed/built by a French or Scottish company?


Plus, as it is, there are ways to get fairly fast texture mapping routines working on the Jaguar. (like rendering to a buffer in GPU SRAM and then copying that "tile" out to the framebuffer when it's complete, or going a step further and caching several pixels of a texture in GPU RAM and rendering to the GPU buffer from there) That will also reduce bandwidth consumed in main memory (64-bit and FPM accesses are used rather than slow single-pixel DRAM reads/writes, and more time spent working in SRAM) so the OPL, DSP, and 68k have more bus time.

Those routines should actually be faster than the 3DO, though probably slower than the Saturn. (definitely slower for actual polygon rendering -more set-up time and overhead than the Saturn's hardware quad renderer- though for rotated objects it might be closer)

Well, why don't you show us a demo if this is so easy to do ? :)

 

RGB/component is far more important once you get to higher resolutions, but not nearly as big a difference from the common ~320x240 res stuff (generally a 5-7 MHz dot clock).
I disagree. Have you tried RGB vs S-video side-by-side ? RGB is definitely sharper, and doesn't suffer from the lower chroma resolution which makes blue text unreadable, for example. It's even more noticeable if you compare it against composite (no dot-crawl, much better horizontal resolution), not to mention RF. Yes, RGB really shines for high-resolution stuff, but makes a noticeable difference for 320x240 graphics too.

 

Hang on, what's CRY?? I have a Jaguar SCART lead and as far as I can tell it's RGB, as it has all the pins instead of just 5 or 6. Is there a better, clearer way of displaying the console's output on a TV?
No, it's an internal color format - it has nothing to do with how you connect the Jaguar to the display.

 

Hang on, what's CRY?? I have a Jaguar SCART lead and as far as I can tell it's RGB, as it has all the pins instead of just 5 or 6. Is there a better, clearer way of displaying the console's output on a TV?
No, it's an internal color format - it has nothing to do with how you connect the Jaguar to the display.

 

OK, so it's not something you can tinker with to improve the picture quality then. Shame.


I'd assume providing RGB direct from the hardware to a display device, as seems pretty common on all computer hardware of the time, is a cheaper solution requiring less electronics than a composite signal; surely the correct argument would be to provide an external composite/S-video encoder? Also not sure how an external RGB device would connect and give a better image if the pure RGB wasn't there... and if it was there, this would just be a SCART lead.

It's technically true that this should be simpler, due to color CRTs natively using analog RGB, but there's the issue of providing proper buffering and composite sync support for RGB, and the main issue of having a market of TVs full of cheaply mass-produced composite and RF decoding circuitry.

 

Composite monitors were cheaper than RGB monitors for computers (CGA PCs, Tandy, Amiga, etc. had composite for low-end models in spite of using RGB natively -albeit CGA/EGA required special monitors for RGB, since they didn't use normal analog RGB+sync but RGBI or 2-2-2 digital RGB on the cables, with the monitors having to decode that digital RGB signal; VGA used standard analog RGB + H/V sync).

Either that, or computer manufacturers were just trying to cheat customers with higher mark-ups on RGB monitors (albeit there's also the issue of actual high-quality RGB monitors with finer dot pitch and greater beam precision vs what was common on composite monitors -or some of the cheaper ST/Amiga monitors- . . . not to mention high-res monitors supporting higher sync rates, or multi-sync monitors. Except Atari had the 30 kHz 70 Hz B/W monitor for less than the standard TV-resolution RGB monitors . . . and didn't offer cheaper grayscale low-res RGB monitors AFAIK).

 

I'd imagine RGB is quite common in Japan too, so that's probably the majority of the customer base having RGB capable equipment, vs it seems the US who are stuck with S-Video/Composite? Also wasn't the chipset designed/built by a French or Scottish Company ?

The US went to component in the late 90s (due to DVD), though it honestly would have made more sense to use RGB -especially since many DVD decoders used analog RGB internally (so it had to be transcoded to YPbPr anyway), and RGB had been the standard for multimedia monitors and computers for well over a decade prior to that.

Prior to that, composite was common, but S-video was fairly niche. (it wasn't uncommon for mid-range TVs to have s-video, but there was relatively little you could use it for . . . some game systems, SVHS, high-end laser disc players, VCD players, and video out for some PC video cards -or adapter cards for VGA to composite/s-video -the latter actually available as really small/simplistic analog adapters on 8-bit ISA cards using only the gnd/power lines from the ISA port)

 

 

Plus, as it is, there are ways to get fairly fast texture mapping routines working on the Jaguar. (like rendering to a buffer in GPU SRAM and then copying that "tile" out to the framebuffer when it's complete, or going a step further and caching several pixels of a texture in GPU RAM and rendering to the GPU buffer from there) That will also reduce bandwidth consumed in main memory (64-bit and FPM accesses are used rather than slow single-pixel DRAM reads/writes, and more time spent working in SRAM) so the OPL, DSP, and 68k have more bus time.

Those routines should actually be faster than the 3DO, though probably slower than the Saturn. (definitely slower for actual polygon rendering -more set-up time and overhead than the Saturn's hardware quad renderer- though for rotated objects it might be closer)

Well, why don't you show us a demo if this is so easy to do ? :)

Why not ask crazyace, he's the one who mentioned that route to me. :P

I disagree. Have you tried RGB vs S-video side-by-side ? RGB is definitely sharper, and doesn't suffer from the lower chroma resolution which makes blue text unreadable, for example. It's even more noticeable if you compare it against composite (no dot-crawl, much better horizontal resolution), not to mention RF. Yes, RGB really shines for high-resolution stuff, but makes a noticeable difference for 320x240 graphics too.

 

It's TV and video encoder dependent; some cases are definitely closer than others. Obviously, RGB becomes more attractive at higher resolutions.

That's also not to say that RGB (or component) can't also have significant artifacts due to buffering or sync issues. (an extremely common problem on the Genesis/MD is very visible "jailbar" artifacts in RGB -actually more of an issue than the S-video output from the CXA1145 or CXA1645 from what I've seen)

 

And obviously it's better compared to composite, though that can also vary massively depending on the encoder and TV used. (the SNES2's composite output is nearly as clean and sharp as S-video on an SNES or modded SNES2 -N64 composite is also exceptionally clean for lower resolutions . . . on TVs with good composite decoding at least)

 

That's not counting the horrible problem with some SCART TVs that can't properly differentiate between composite and RGB modes (ie devices that use composite video as the sync signal for RGB can get really screwy -namely TVs trying to display composite and RGB at the same time). That's a problem with both HDTVs and some late model SDTVs.


rendering to a buffer in GPU SRAM and then copying that "tile" out to the framebuffer when it's complete, or going a step further and caching several pixels of a texture in GPU RAM and rendering to the GPU buffer from there) That will also reduce bandwidth consumed in main memory (64-bit and FPM accesses are used rather than slow single-pixel DRAM reads/writes, and more time spent working in SRAM) so the OPL, DSP, and 68k have more bus time.

Well, why don't you show us a demo if this is so easy to do ? :)

Why not ask crazyace, he's the one who mentioned that route to me. :P

Why program it when you can just ask somebody who tried? ;)

 

Zero is correctly pointing out that the Jaguar is much harder to program than it seems. Reading the technical references, you can see all kinds of possibilities from 10,000 feet. When you actually try to program it, up close, you get to see every ugly wart.

 

Sure, texture mapping with GPU SRAM is possible, and in a few cases it improves performance. However, there are tons of warts in this approach that you're not seeing from up there in the clouds!

 

First off, the blitter does not have an independent bus - it shares the same bus as everybody else. Just because the blitter is using the bus to access on-chip RAM doesn't mean the Jaguar gets a second bus to main DRAM for the OPL, 68K, and DSP. It's one bus, and while the blitter runs, the OPL and 68K are frozen, and the DSP can only use its own on-chip RAM.

 

(Side note: You can set it so the DSP, OPL, and 68K can stop the blitter to "share the bus", but there is no performance gain versus executing things serially, due to no overlapping execution. At best you break even, typically it's much slower than serial, and there are ugly bugs you will hit. It's a dumb idea.)

 

However, the peak numbers still look better: Texturing DRAM to DRAM costs 11 cycles/pixel. Texturing SRAM->DRAM, DRAM->SRAM, or SRAM->SRAM, each cost as little as 5 cycles/pixel.

 

The next "wart": Blitting to or from SRAM is very bad for GPU performance. This is obvious when you think about it - the GPU has a dedicated bus to SRAM, which is the secret to its performance. But, you are using that bus for 2 of 5 cycles, during which the GPU is frozen. (No, there is no way to avoid the freeze, it is hardwired regardless of what the GPU is doing that cycle.) Your focus on blitter performance ignores the fact that the GPU is actually the main bottleneck in all types of polygon games on the Jaguar. The GPU must do all transform, lighting, triangle setup, rasterization, and even drive the blitter line by line. It takes genius level coding to keep the blitter running 50% of the time while drawing 200 textured polys per frame.

 

Let's look at SRAM->SRAM first. First off, SRAM->SRAM is always worse than DRAM->SRAM. The raw performance for both is the same but remember - there is no "bus sharing" benefit for SRAM->SRAM. The main bus is locked on every cycle while the blitter runs. Worse, SRAM->SRAM has an additional expensive copy operation to get the texture info into SRAM. Finally, it is twice as bad for GPU performance, which is already in the critical path. So SRAM->SRAM is pointless and DRAM->SRAM is always the better option.

 

How about DRAM->SRAM? Sure, it cuts GPU performance down to 60% (2/5) which is really horrible for polygons smaller than 50 pixels wide, but let's say all you have is 3 HUGE polygons because you're doing a rotating cube demo or something. Maybe it's still worth it.

 

The first big problem with DRAM->SRAM is that you can't texture map 16-bit pixels to SRAM. Yes, GPU SRAM is only 32-bit wide, and if you attempt to blit a 16-bit pixel, it doesn't work. (It would work okay for gouraud shading, since that blits 64-bits at a time, but by the way, gouraud shading is SLOWER in SRAM than it is in DRAM, so this is dumb too.)

 

So the only way to go DRAM->SRAM with texture mapping is to use 32-bit pixels. Okay, we're still committed to this idea. So now we have to copy the data out of SRAM back into DRAM after the operation. We can use a 64-bit blit for this. This takes 7 cycles for 2 pixels (4 reading from SRAM, 1 turnaround, 2 writing to DRAM). Of course, while running this, the GPU is slowed to 40%!

 

Don't forget, the framebuffer bandwidth is doubled because we had to use 32-bit pixels. Let's say we're running at 320x240 at 60Hz. 32-bit pixels at this resolution waste a lot of precious RAM, but they also reduce the overall Jaguar performance to 88% of the target speed. (Don't try to blitter your way out of this, all other approaches are slower.)

 

Final score: DRAM->SRAM takes 8.5 cycles/pixel for texture mapping instead of 11, but because overall performance is 88%, that's effectively 9.6 cycles versus 11 yielding only a 13% boost in theoretical speed! The downside? GPU performance is cut in half (8 of 17 cycles spent in GPU SRAM). And you waste 300KB of precious memory. Trust me, you needed that GPU speed and memory more than that 13%.
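
(Sanity-checking the arithmetic above in C, taking kskunk's quoted figures -including the 88% system-speed hit- as given:)

```c
#include <stdio.h>

int main(void)
{
    double blit_in   = 5.0;        /* DRAM->SRAM texturing, cycles/pixel */
    double copy_out  = 7.0 / 2.0;  /* 64-bit copy-out: 7 cycles/2 pixels */
    double total     = blit_in + copy_out;   /* = 8.5 cycles/pixel       */
    double effective = total / 0.88;         /* ~9.66 at 88% speed       */
    /* Prints roughly 12%, i.e. the "13% boost" quoted above,
       give or take rounding of the 9.6 figure. */
    printf("%.1f raw, %.2f effective vs 11.0 -> %.0f%% boost\n",
           total, effective, (1.0 - effective / 11.0) * 100.0);
    return 0;
}
```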

 

How about SRAM->DRAM then? Well, it has pretty much all of the above problems, but they are much less serious. If you have a really low res texture (must be under 32x32) spread across a really huge polygon, it's a definite win: You only have to copy the texture data once, but you can spread those pixels all over DRAM. Also, the 32-bit pixel problem occurs, but it's less serious -- you have to waste half of SRAM (giving you 2KB instead of 4KB), and can still blit to 16-bit pixels, so there is no framebuffer penalty either.

 

Final score: SRAM->DRAM can improve texturing performance by around 50% -- for example, boost 10FPS to 15FPS, but only if you are willing to make an ugly game with very few large polygons and very low resolution textures, typically 32x16. The reason you can only afford a few polygons is that GPU is slowed to 60%, and you really needed all 4KB of SRAM for a fast GPU rasterizer.

 

But, unlike the other approaches we've discussed, SRAM->DRAM texturing was actually an option proposed by the Jaguar engineering team. But I don't think any released games used it, since it was so limited. They proposed the GPU be frozen entirely during each blit, which is far more pessimistic than my suggestion. They probably knew about bugs preventing my "optimistic" numbers from working.

 

Before you get too excited about SRAM->DRAM, think about the consequences of these tiny textures: You will either have a texture so chunky you may as well drop resolution (like Doom did), or a small texture tiled so many times it will barely look more interesting than gouraud shading. Great looking games like AvP look great because they have high res textures.

 

If you've read and understood this post, you now have a sneak peek of what it takes to make up back-of-the-envelope performance calculations for the Jaguar. Keep it up and you might be writing some GPU code to test your theories soon. ;)

 

- KS

Edited by kskunk

It's nice that you can blit 16 bit to the CLUT ram though :)

Sure, but CLUT RAM is also the slowest RAM on TOM, so you probably don't want to blit TO it. It takes 8 cycles to read back one 64-bit phrase from CLUT, which kills any benefit. The cycle counts work out about the same for SRAM->CLUT->DRAM as DRAM->DRAM, except with the CLUT you waste even more time setting up the extra blits. Blitter setup is already so expensive that it is a major bottleneck with small polygons, and SRAM->CLUT->DRAM means you have to change the blitter mode twice on each line of each polygon. You also lose even more cycles since you can't cache part of the blitter setup. Ouch!

 

Of course, you can target the CLUT a different way if you plan to texture at 60FPS, but if that's the plan, rendering to the line buffers is faster -- you save the OP processor overhead that way.

 

I wrote a 60FPS "texture mapper" demo for the Jaguar when I first got my BJL, and it's fine if you only need a couple of polygons and think Doom's horizontal resolution is sharp enough. Oh, yeah, hope you didn't need any time left for an actual game. ;)

 

Anyway, blitting FROM the CLUT isn't the worst idea if you don't mind 16x16 textures, but I think low res textures are kind of pointless. I wouldn't mind being proven wrong.

 

- KS

Edited by kskunk

Being that I don't program, I've read so much Jag tech stuff that I understood mostly everything despite not knowing all of the details. All technical jargon aside... Whenever I see any kind of image that's only 16x16 in size, it always reminds me of how SNK saved their background images and sprites... Of course their image sizes could go up to 512 pixels. A 16x16 image is less than 1 KB (16-bit) in memory size. I've always pictured texture map images 16x16 or smaller streaming from a cartridge on the Jag; from what I understand, streaming texture maps from a cartridge would cause slower frame rates considering the size of the image. Wouldn't it be feasible for texture mapping to stream in small chunks 16x16 in size, similar to how SNK would break their background images up into chunks and save them as sprites...? Not that I'm suggesting it be saved as sprites, but rather a series of small 1 KB (or less) images streaming from a cartridge in order to make up a whole texture map -or would it be too slow for the GPU/OP to alter and display and still keep a high frame rate, 60 or 30 fps?


Being that I don't program, I've read so much Jag tech stuff that I understood mostly everything despite not knowing all of the details. All technical jargon aside... Whenever I see any kind of image that's only 16x16 in size, it always reminds me of how SNK saved their background images and sprites... Of course their image sizes could go up to 512 pixels. A 16x16 image is less than 1 KB (16-bit) in memory size. I've always pictured texture map images 16x16 or smaller streaming from a cartridge on the Jag; from what I understand, streaming texture maps from a cartridge would cause slower frame rates considering the size of the image. Wouldn't it be feasible for texture mapping to stream in small chunks 16x16 in size, similar to how SNK would break their background images up into chunks and save them as sprites...? Not that I'm suggesting it be saved as sprites, but rather a series of small 1 KB (or less) images streaming from a cartridge in order to make up a whole texture map -or would it be too slow for the GPU/OP to alter and display and still keep a high frame rate, 60 or 30 fps?

 

I'd hazard a guess that you didn't understand as much as you thought you did. Why not break them up into 1x1 images for 256 TIMES MOAR POWA?! You could maybe render that at 240 or 120fps? ;-)


It's nice that you can blit 16 bit to the CLUT ram though :)

Sure, but CLUT RAM is also the slowest RAM on TOM, so you probably don't want to blit TO it. It takes 8 cycles to read back one 64-bit phrase from CLUT, which kills any benefit. The cycle counts work out about the same for SRAM->CLUT->DRAM as DRAM->DRAM, except with the CLUT you waste even more time setting up the extra blits. Blitter setup is already so expensive that it is a major bottleneck with small polygons, and SRAM->CLUT->DRAM means you have to change the blitter mode twice on each line of each polygon. You also lose even more cycles since you can't cache part of the blitter setup. Ouch!

 

Of course, you can target the CLUT a different way if you plan to texture at 60FPS, but if that's the plan, rendering to the line buffers is faster -- you save the OP processor overhead that way.

 

I wrote a 60FPS "texture mapper" demo for the Jaguar when I first got my BJL, and it's fine if you only need a couple of polygons and think Doom's horizontal resolution is sharp enough. Oh, yeah, hope you didn't need any time left for an actual game. ;)

 

Anyway, blitting FROM the CLUT isn't the worst idea if you don't mind 16x16 textures, but I think low res textures are kind of pointless. I wouldn't mind being proven wrong.

 

- KS

 

Interesting results - when I tested in code originally, it was a definite win to use the CLUT, but maybe that's the difference between a demo and game code :) (and this was many, many years ago)

 

(I should turn on the Jaguar and try texture mapping again - but I'm having enough trouble just trying to find time to continue working on game code; the poor Jaguar is gathering a lot of dust)


(I should turn on the Jaguar and try texture mapping again - but I'm having enough trouble just trying to find time to continue working on game code; the poor Jaguar is gathering a lot of dust)

Yes you should. :D Every time I'm pretty sure I know how the Jaguar works I'm surprised by the results when I try it. Maybe there's something I'm missing in my accounting for cycles.

 

Thinking about this some more, you could also use one of the two line buffers in place of the CLUT as a temporary storage. The performance benefit is bigger when going LB->DRAM than DRAM->LB, because loading up the texture in the LB can use the 32-bit path that isn't available in CLUT RAM.

 

As long as you're using the OP under 20% of the time (easy for a framebuffer game), you can free up one of the line buffers by just filling the same, single, line buffer during each horizontal blank. There is a register that lets you trick the OP into doing this.

 

- KS


One other point about 3D polygon games on the Jaguar:

 

It's very hard to get high poly counts on the Jag. Although the Jag has plenty of problems with texture mapping performance, having tried a texture mapping engine, I don't really see this as its biggest performance bottleneck.

 

Playstation games used pure gouraud shading all the time, and few people noticed because the poly counts were so high. For example, all the characters and enemies in Crash Bandicoot were gouraud shaded - Crash himself was 512 gouraud shaded polygons and often took up 20% of the screen! The Mario-like open world game "Spyro" used gouraud shading on huge swaths of the background and the majority of enemies and collectible objects.

 

This was done for performance reasons, but they could get away with it because a polygon that only takes up 20 pixels on the screen doesn't need a lot of texture. Polygons this small were normal in the top Playstation games.

 

The Playstation could deal with this because every step of creating a polygon, from transform and lighting to setup and rasterization, was optimized with special hardware. On the Jaguar, they left all of that up to the GPU programmer.

 

You could do amazing games with 3,000 gouraud shaded polygons per frame. But the more polygons I try to pack in, the more I run into all these stupid bottlenecks: Blitter setup time, GPU rasterizing time, triangle setup time (NOT cheap with those divides), memory bottlenecks in transform and lighting...

 

For me, the main annoyance of texture mapping isn't even the slow blitter anymore, it's all the extra texture address crap that has to flow through the polygon pipeline from transform to blitter setup. Even the texture addresses are set up in the blitter in this unnatural order that doesn't fit how you compute them in the GPU, wasting another 10 cycles per texture line... It would have been free to wire the registers up correctly, and they did in the Jaguar 2, but with the Jaguar they just had no idea what a 3D engine would look like, so they made a lot of basic mistakes.

 

 

If the blitter finished my textured spans twice as fast, I wouldn't have twice the framerate. I'd just spend more time waiting for the GPU to catch up.

I wish Flare had a single 3D programmer writing an engine under the same roof as the hardware designers. That would have made a much bigger difference than color spaces. From my understanding of history, the first Jaguar 3D engine (in Cybermorph) didn't get started until after all of the hardware decisions were set in stone. At which point there must have been a lot of "oh well" discoveries.

 

The details of a high poly count on the Jaguar are just confounding.

 

- KS

Edited by kskunk

Being that I don't program, I've read so much Jag tech stuff that I understood mostly everything despite not knowing all of the details. All technical jargon aside... Whenever I see any kind of image that's only 16x16 in size, it always reminds me of how SNK saved their background images and sprites... Of course their image sizes could go up to 512 pixels. A 16x16 image is less than 1 KB (16-bit) in memory size. I've always pictured texture map images 16x16 or smaller streaming from a cartridge on the Jag; from what I understand, streaming texture maps from a cartridge would cause slower frame rates considering the size of the image. Wouldn't it be feasible for texture mapping to stream in small chunks 16x16 in size, similar to how SNK would break their background images up into chunks and save them as sprites...? Not that I'm suggesting it be saved as sprites, but rather a series of small 1 KB (or less) images streaming from a cartridge in order to make up a whole texture map -or would it be too slow for the GPU/OP to alter and display and still keep a high frame rate, 60 or 30 fps?

 

I'd hazard a guess that you didn't understand as much as you thought you did. Why not break them up into 1x1 images for 256 TIMES MOAR POWA?! You could maybe render that at 240 or 120fps? ;-)

 

Uhhmm... OK... I know of you, but I don't know you? I mean, the 2.5D stuff was great with the Doom 2 port and all, so a more constructive answer would've sufficed, but whatever...? Moving on... :ponder:

Edited by philipj

Being that I don't program, I've read so much Jag tech stuff that I understood mostly everything despite not knowing all of the details. All technical jargon aside... Whenever I see any kind of image that's only 16x16 in size, it always reminds me of how SNK saved their background images and sprites... Of course their image sizes could go up to 512 pixels. A 16x16 image is less than 1 KB (16-bit) in memory size. I've always pictured texture map images 16x16 or smaller streaming from a cartridge on the Jag; from what I understand, streaming texture maps from a cartridge would cause slower frame rates considering the size of the image. Wouldn't it be feasible for texture mapping to stream in small chunks 16x16 in size, similar to how SNK would break their background images up into chunks and save them as sprites...? Not that I'm suggesting it be saved as sprites, but rather a series of small 1 KB (or less) images streaming from a cartridge in order to make up a whole texture map -or would it be too slow for the GPU/OP to alter and display and still keep a high frame rate, 60 or 30 fps?

Except for a few problems with that comparison:

Kskunk is talking about using a single 16x16 (or 32x32 or 16x32) texture stretched over a large polygon (or possibly tiled/repeated many times over that area). Using many different texture tile blocks would require updates to the buffer holding the texture (be it CLUT RAM or the GPU scratchpad). The Neo Geo doesn't do that.

 

Also, Neo Geo games usually don't build things with 16x16 sprites at all, since that would quickly exhaust the 384-sprite limit (it would take 266 16x16 sprites to fill a 304x224 area). So to make the most of the Neo Geo, large BG planes use very large sprites, but even then you'd be limited by the 16-pixel width and the 96 sprite/line limit (enough to overlap 5 solid BG layers, but no bandwidth left for sprites). Due to using larger sprites to build the BG vs 8x8 tiles, there's also less flexibility with the use of palettes (each sprite selects 1 of 256 15-color palettes indexed from 15-bit RGB, rather than each 8x8 cell selecting a palette as on arcade systems or consoles with 8x8 character-map layers; but the Neo Geo also has a huge selection of subpalettes to work with . . . some contemporary arcade boards came close to that too -I know several of Sega's had 128 subpalettes to the Neo's 256).

 


The next "wart": Blitting to or from SRAM is very bad for GPU performance. This is obvious when you think about it - the GPU has a dedicated bus to SRAM, which is the secret to its performance. But, you are using that bus for 2 of 5 cycles, during which the GPU is frozen. (No, there is no way to avoid the freeze, it is hardwired regardless of what the GPU is doing that cycle.) Your focus on blitter performance ignores the fact that the GPU is actually the main bottleneck in all types of polygon games on the Jaguar. The GPU must do all transform, lighting, triangle setup, rasterization, and even drive the blitter line by line. It takes genius level coding to keep the blitter running 50% of the time while drawing 200 textured polys per frame.

That would still be good for fast texture mapping in non-polygonal games that need limited GPU use, like drawing lots of rotated/warped sprites or mode-7 style effects. (Or maybe for Doom-like games, though GPU overhead for ray-casting may be an issue there too, over texture mapping speed -you'd also need a 2-pass renderer for walls, drawing lines of affine texture and then rotating them to column orientation, and a single pass to fill lines for the floor and ceiling -the alternative being GPU-rendered columns.)

 

 

However, the peak numbers still look better: Texturing DRAM to DRAM costs 11 cycles/pixel. Texturing SRAM->DRAM, DRAM->SRAM, or SRAM->SRAM, each cost as little as 5 cycles/pixel.

Yeah, which would be much more useful if the blitter treated GPU SRAM as a separate bus (like the GPU does -and allowed it to be accessed as 16-bit words), let alone if the blitter had a dedicated buffer to work in (or if GPU SRAM could be split off as a blitter buffer on a separate bus).

Then again, that's almost certainly more effort (or silicon) than it's worth compared to just beefing up the Jag's texture mapping logic. (Even just a 64-bit destination buffer would have been a big deal, allowing 16 cycles for 4 pixels in a best case . . . let alone more serious buffering for texture mapping . . . or hardware geometric primitive rasterization, the latter probably being a bigger deal in general and something more clearly needed than texture mapping from Flare's 1990 perspective . . . and it should have been a higher priority than Z-buffering for that matter . . . I can understand the emphasis on Z-buffering over texture mapping, but over polygon rasterization logic? -even partial assistance like the JagII's trapezoid rasterization.)

 

Sure, CPU based line-by-line rasterization was the standard for the time in computer (and some limited 3D on consoles), but that's because it was being done on systems that used CPU grunt for nearly anything (little to no hardware acceleration for 3D and often 2D as well), so obviously not the model of the Jaguar's design. (PC and 32x 3D games obviously had to deal with line by line set-up/rasterization like the Jag, but also had to do all drawing via the CPU on top of 3D calculations -well, the 32x had hardware line fill to use for solid shading polygons . . . and VGA had some similar basic support like that)

 

 

Or in the context of "last minute" hacks/tweaks for performance demands that were apparent by late 1992, something like an SRAM buffer in main memory might make some sense (for use as an external texture cache/buffer in the 2nd RAM bank -so separate source and destination, and no page-change penalties). Though for the cost of SRAM, it would probably make much more sense to add DRAM to that bank instead -especially since you'd get FPM accesses much of the time anyway. (The Jag is already set up to interface a 2nd DRAM bank, plus having the DSP and 68k work in that bank could reduce overhead as well -and all of those only need a 16-bit bus, so a single 16-bit-wide 128k or 512k chip would be fine . . . aside from being slower to load/page between than 64-bit DRAM. Had they planned on using the 2nd bank earlier on, it also would have made sense to allow ST/Amiga-style interleaving, with DSP/68k accesses to one bank and FPM GPU/blitter/OPL accesses to the other, to moderately increase bus-sharing efficiency.)

 

How about SRAM->DRAM then? Well, it has pretty much all of the above problems, but they are much less serious. If you have a really low res texture (must be under 32x32) spread across a really huge polygon, it's a definite win: You only have to copy the texture data once, but you can spread those pixels all over DRAM. Also, the 32-bit pixel problem occurs, but it's less serious -- you have to waste half of SRAM (giving you 2KB instead of 4KB), and can still blit to 16-bit pixels, so there is no framebuffer penalty either.

And that's only good for a low-res texture stretched over a large area or tiled/repeated many times (like many N64 games did), not for caching/buffering new texture after new texture and tiling them as a higher-res composite texture.

 

 

 

On another note, one really huge area that most games missed out on is just rendering at a low resolution. It can be a bit ugly, but a low-res 3D/pseudo-3D game rendered in a decent window/screen size at a decent framerate (with fairly good textures/shading/polygon counts, just tons of coarse jaggies) could work relatively well. (And in ray-casting games, cutting horizontal res obviously has a greater impact than vertical -fewer rays to cast; a critical trade-off made in Jaguar Doom that should also have been made in AvP . . . and obviously in the 3DO version of Doom -it was done on the SNES and 32x, and is obviously optional on PC.)

 

And unlike the 32x or PC games, the Jaguar actually supports pretty low native resolutions (dot-clock wise), so you wouldn't have to use a small screen size or waste bandwidth scaling/doubling pixels. (Well, the 32x can control vertical resolution via the line table, and that's what Yeti 3D does, but the horizontal res is stuck at 320 pixels, so it ends up writing double-wide pixels -160x112 to 320x112, line-doubled via the VDP to 320x224 . . . the earlier builds also did interpolation of 160x112 to 320x112, but the latest one is unfiltered and runs at almost double the framerate -and honestly looks better without the blur IMO.)

 

For some games that manage high framerates at high resolutions and already look fairly good style-wise, it obviously wouldn't make sense. But for games that use unattractively bland (untextured, solid-shaded, low-polygon-count) models, or more detailed games at poor framerates, using low res could have really helped.

Plus, you can leave a high-res border (above and/or below the low-res lines on-screen) for status/inventory/etc . . . or directly overlay high-res objects and have the OPL up-scale the low-res window (more OPL bandwidth used, but still less GPU/blitter overhead), and obviously limit scaling to integer values (ie 2x or 3x width/height, avoiding scaling artifacts from fractional values).

Edited by kool kitty89

I can understand the emphasis on Z-buffering over texture mapping, but over polygon rasterization logic? -even partial assistance like the JagII's trapezoid rasterization

The manual states their thinking clearly: "The GPU can handle rasterization." And it can. It's just inefficient with small polygons. They may have thought a few large polygons would be enough, with sprites on top. They didn't have a lot of examples to draw from.

 

Proper rasterization is complicated. There are tons of edge cases involved with creating seamless edges between adjoining polygons as they split and join, and grow thinner or shorter than one pixel. The 3DO mishandles seams and the Saturn famously picked the wrong algorithm. Leaving it to the GPU is a better idea than screwing it up!
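
For anyone curious what getting those seams right involves in software: the usual answer is an unambiguous ownership rule for pixels that land exactly on a shared edge. A minimal C sketch of one common formulation (not Flare's or anyone's shipping code; vertices assumed counter-clockwise in the edge function's convention):

```c
#include <stdint.h>

typedef struct { int x, y; } Pt;

/* > 0 when (x,y) lies strictly to the left of the directed edge a->b. */
static long edge_fn(Pt a, Pt b, int x, int y)
{
    return (long)(b.x - a.x) * (y - a.y) - (long)(b.y - a.y) * (x - a.x);
}

/* Tie-break for pixels exactly on an edge.  Any fixed rule on edge
   *direction* works: two triangles sharing an edge traverse it in
   opposite directions, so exactly one of them claims those pixels
   (D3D's "top-left rule" is one well-known instance of this idea). */
static int owns_edge(Pt a, Pt b)
{
    int dx = b.x - a.x, dy = b.y - a.y;
    return dy > 0 || (dy == 0 && dx > 0);
}

/* Flat-fill a counter-clockwise triangle into a 320x240 buffer:
   adjoining triangles neither overlap nor leave gaps on shared edges. */
void fill_tri(uint16_t *fb, Pt v0, Pt v1, Pt v2, uint16_t color)
{
    for (int y = 0; y < 240; y++)          /* bounding box omitted */
        for (int x = 0; x < 320; x++) {
            long w0 = edge_fn(v1, v2, x, y);
            long w1 = edge_fn(v2, v0, x, y);
            long w2 = edge_fn(v0, v1, x, y);
            if ((w0 > 0 || (w0 == 0 && owns_edge(v1, v2))) &&
                (w1 > 0 || (w1 == 0 && owns_edge(v2, v0))) &&
                (w2 > 0 || (w2 == 0 && owns_edge(v0, v1))))
                fb[y * 320 + x] = color;
        }
}
```

(Which is why "leave it to the GPU programmer" was at least defensible: the rule is cheap in software but easy to get subtly wrong in fixed hardware, as the Saturn showed.)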

 

I get the feeling they didn't think too deeply about 3D. They tried to put in some nice ingredients in hopes developers could use them to bake up a 3D pipeline, but they didn't really think it all through.

 

Don't forget that the most optimized, highest bandwidth, most deeply buffered part of Tom is the 2D engine. The OPL can sustain 2 pixels per cycle, and it achieves that easily. It has huge dedicated SRAM buffers, several independent buses, and deep pipelines to make that possible. The OPL has dedicated, fixed-function, hardware to DMA and parse the object list, and "rasterize" sprites, including clipping and scaling edges.

 

The blitter is nowhere near that elaborate. It's just a little state machine. You can see where their priorities were. The GPU was quite deliberately intended to add all the "missing" stuff to the blitter. You can see it in how their buses connect, and they say as much in the docs.

 

Or in the context of "last minute" hacks/tweaks for performance demands that were apparent by late 1992... and the Jag is already set-up to interface a 2nd DRAM bank

Yes, increasing the on-console RAM by 20% with a 5th DRAM chip would allow DRAM->DRAM texture mapping in 5 cycles. You'd have 512KB of dedicated texture RAM in this design, which is probably enough. (You wouldn't want to use it for the frame buffer because reading is too slow.) Of course, it would cost $10, and the Jaguar was over its cost budget already.

 

By the way, CoJag can do DRAM->DRAM texture mapping in 5 cycles. This is the fastest way to texture map on the Jaguar, since there is no extra overhead in copying stuff in or out of SRAM.

 

You could also just use faster ROM in the cartridge. For another couple of dollars, your cart could use 200ns ROMs, allowing ROM->DRAM texture mapping in 8 cycles, which is as fast as all of the fastest approaches we've discussed above once you include the copy in/out overhead.
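To put those cycle counts in perspective, a quick back-of-envelope calculation, assuming the stock ~26.59 MHz system clock and ignoring page misses, refresh, and bus contention (so these are optimistic upper bounds, not measurements):

```c
#include <stdio.h>

int main(void)
{
    const double clock_hz = 26.59e6;
    const long   window   = 160L * 112L;   /* a low-res Doom-ish window */
    const int    cycles[] = { 5, 8 };      /* DRAM->DRAM vs ROM->DRAM   */

    for (int i = 0; i < 2; i++) {
        double texels_per_sec = clock_hz / cycles[i];
        double ms_per_window  = 1000.0 * window / texels_per_sec;
        printf("%d cycles/texel: %.2f Mtexels/s, %.2f ms per 160x112 fill\n",
               cycles[i], texels_per_sec / 1e6, ms_per_window);
    }
    return 0;   /* ~5.3 vs ~3.3 Mtexels/s; ~3.4 vs ~5.4 ms per window */
}
```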

 

Not to toot my own horn, but the Skunkboard can do texture mapping from ROM->DRAM in 8 cycles as well. This is enabled by default. Try it out with your own code!

 

It's also technically possible to put a small PSRAM in the cartridge itself, allowing 5 cycle PSRAM->DRAM texturing. Talk about an echo of the old 2600 days! (Sadly, there is no way to put a DRAM in, but 128KB PSRAMs were under $3 in the Jaguar's prime days.) 128KB is sufficient for scenes in AvP or Doom without too much costly texture swapping behind the scenes.

 

- KS0

Edited by kskunk

Proper rasterization is complicated. There are tons of edge cases involved with creating seamless edges between adjoining polygons as they split and join, and grow thinner or shorter than one pixel. The 3DO mishandles seams and the Saturn famously picked the wrong algorithm. Leaving it to the GPU is a better idea than screwing it up!

Yeah . . . not just the rasterization algorithm to worry about either, but the whole quads-vs-triangles issue. (Granted, what the Jag II did was still neither of those -not full rasterization either, but warped trapezoids you could manipulate to form triangles, right?)

 

The PSX had a lot of seaming too due to rounding errors in the GTE iirc. (actually more noticeable than seaming on the Saturn -Tomb Raider is a prime example- though the Saturn had other problems . . . )

 

I get the feeling they didn't think too deeply about 3D. They tried to put in some nice ingredients in hopes developers could use them to bake up a 3D pipeline, but they didn't really think it all through.

Leaving flexibility in the system is more foolproof in some respects, for sure . . . more so with the various pseudo-3D engines that were popular in the early/mid 90s on PCs (ie Doom-style renderers and Comanche-style height maps).

And that's a bit up to luck too, since Flare obviously wasn't aiming at supporting those sorts of engines (especially voxels -though corridor/maze style ray-casting renderers may have been considered).

 

The Z-buffering thing still bothers me though . . . with the amount of work that went into that, what else could they have pushed? Texture mapping was hard to judge, granted (especially since they already had fast scaling/zooming with the object processor, so affine rendering was only needed for warping and rotation effects), but there are other areas that seem like they would have been major concerns at the time too . . . like a dynamic instruction (possibly data) cache for the GPU, a command cache (or double-buffered blitter registers) to help pipeline blitter operation, and/or keeping the overall design simpler (ie similar but with no Z-buffer, either saving some silicon or perhaps designing the chip with a larger scratchpad from the start) while focusing more on troubleshooting. (Albeit some things probably couldn't have been helped without more time or more funding for aggressive testing . . . just putting more engineering time into it could only go so far -though I'd think omitting the Z-buffering logic would at least have helped avoid bugs/problems to some extent.)

 

Don't forget that the most optimized, highest bandwidth, most deeply buffered part of Tom is the 2D engine. The OPL can sustain 2 pixels per cycle, and it achieves that easily. It has huge dedicated SRAM buffers, several independent buses, and deep pipelines to make that possible. The OPL has dedicated, fixed-function, hardware to DMA and parse the object list, and "rasterize" sprites, including clipping and scaling edges.

 

The blitter is nowhere near that elaborate. It's just a little state machine. You can see where their priorities were. The GPU was quite deliberately intended to add all the "missing" stuff to the blitter. You can see it in how their buses connect, and they say as much in the docs.

This is another thing I've thought about but never commented much on . . . the blitter was part of the 2D engine too, but far less optimized than the OPL (obviously the brunt of 2D was intended to be handled with OPL sprites -similar to the Panther).

 

What I wonder is how things might have ended up if Flare had kept more along the lines of an evolutionary development of the Slipstream (or Lynx), with a pure blitter+framebuffer graphics engine, pouring all that buffering/optimization into blitter operation and leaving the OPL concept back with the Panther. And more so, what implications that would have had for 3D: since the blitter largely handled 3D, enhancing the 2D blitter operations would generally have boosted 3D performance as well. (And if that ended up beefing up the affine texture-mapping logic, then aside from 3D textures, it would mean silky smooth rotation and warping effects for 2D games -on top of scaling/zooming effects. A generic sketch of what affine stepping buys you follows below.)
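(For reference, a generic affine span loop in plain C -made-up names, not actual blitter code- of the sort such address generators implement in hardware: u and v step by a constant per pixel, which covers rotation, scaling, and shearing, but not perspective, which needs a divide or per-span correction:)

```c
#include <stdint.h>

#define TEX_W 64
#define TEX_H 64

void draw_affine_span(uint16_t *dst, const uint16_t tex[TEX_H][TEX_W],
                      int count, int32_t u, int32_t v,   /* 16.16 fixed */
                      int32_t du, int32_t dv)
{
    while (count--) {
        *dst++ = tex[(v >> 16) & (TEX_H - 1)][(u >> 16) & (TEX_W - 1)];
        u += du;   /* constant steps: a rotated span just walks along */
        v += dv;   /* the rotated texture axes                        */
    }
}
```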

Granted, you'd still need a VDC to scan the framebuffer, and you'd want enough buffering on that to make efficient use of the bus. (but such buffering should have been a small fraction of what the OPL used)

 

The 3DO kind of ended up like that (graphics focused on a blitter), but far, far more conservative and less cost-effective than the Jaguar. (The PSX took that route too, except with 3D first and 2D second -still, its GPU was pretty efficient at 2D drawing, and even had a fast "sprite" mode that disabled rotation/warping and only drew zoomed/stretched rectangles. Obviously a much more aggressive design than the 3DO, though with a far higher manufacturing cost ceiling than the Jaguar -so efficient design on top of looser cost constraints, making things like separate CPU/audio/video buses, 2 MB SDRAM and 1 MB SGRAM, etc. practical.)

 

 

Or in the context of "last minute" hacks/tweaks for performance demands that were apparent by late 1992... and the Jag is already set-up to interface a 2nd DRAM bank

Yes, increasing the on-console RAM by 20% with a 5th DRAM chip would allow DRAM->DRAM texture mapping in 5 cycles. You'd have 512KB of dedicated texture RAM in this design, which is probably enough. (You wouldn't want to use it for the frame buffer because reading is too slow.) Of course, it would cost $10, and the Jaguar was over its cost budget already.

A single 128k chip would have been $3-4 (plus the added cost to the motherboard), and whatever that inflated to at SRP . . . and then weigh that against the benefits it could have had for marketability in spite of the added cost. Or they could have made other trade-offs to keep costs down, like reducing RAM to 1.5 MB to facilitate using both banks: ie 8 128kx8-bit chips for the 64-bit bank plus 1 256kx16 for the 2nd bank, or -cheaper- 2 512k chips and 4 128k chips for 1 MB at 32 bits plus 512k at 64 bits. (Using a full 2 MB with more chips to allow more banks -ie 2 banks of 8 128kx8-bit DRAMs for two 1 MB 64-bit banks- would probably cost more than the single 2M+128k, or probably even the 2M+512k: higher cost for the RAM chips plus lots more traces on the board. The bank arithmetic is sanity-checked in the sketch below.)
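(A quick check of that bank math; I'm assuming x16 organizations where only byte capacities are given -e.g. a "512k chip" = 256Kx16 and a "128k chip" = 64Kx16- and that bus width = chips x bits-per-chip:)

```c
#include <stdio.h>

static void bank(const char *cfg, int chips, long words, int bits)
{
    printf("%-26s -> %3d-bit bus, %4ld KB\n",
           cfg, chips * bits, chips * (words / 1024) * bits / 8);
}

int main(void)
{
    bank("8 x 128Kx8  (64-bit bank)", 8, 128 * 1024L,  8);  /* 1 MB  @ 64-bit */
    bank("1 x 256Kx16 (2nd bank)",    1, 256 * 1024L, 16);  /* 512KB @ 16-bit */
    bank("2 x 256Kx16 ('512k')",      2, 256 * 1024L, 16);  /* 1 MB  @ 32-bit */
    bank("4 x 64Kx16  ('128k')",      4,  64 * 1024L, 16);  /* 512KB @ 64-bit */
    return 0;
}
```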

 

And regardless of RAM or CPU modifications, you'd still be stuck with ROM carts and their inherent disadvantages and limitations -including being unattractive to 3rd party developers, with Atari having a hard enough time getting support as it was . . .

Which brings us back to Crazyace's main argument from the 1993 Jag thread (and a recent thread on Sega-16) . . . adding CD-ROM.

That obviously would have been more costly than a bit more RAM -or a different CPU- but the pay-offs could have been massive: far lower cost and risk of software manufacturing for 1st and 3rd parties, reduced overhead for pack-ins, demo discs (or demo discs as pack-ins, reducing lost profits), multimedia prospects, potential for budget/lower-end developers, easier ports of computer games designed around floppy or CD mass storage, etc.

Without investment spending to suppress the price (selling at or below cost), the price would obviously increase even with the cost saved by manufacturing CDs rather than carts. (That would pay off more in the long run, but the few games released in '93 probably wouldn't have been enough to make up the difference in cost.) Then again, all it needed was a positive reception in the test market to secure funding -after that, Atari had more investment capital to work with.

 

Except Atari had a target price point to meet, apparently regardless of how it might compromise the overall marketability of the console. (CD-ROM isn't that impressive from a raw performance or programming standpoint -which Gorf's criticism mirrored- but the impact it could have had on consumers, critics, publishers, and general software production -1st or 3rd party- was extremely far-reaching.)

I'd suggest a floppy drive instead (for cheap mass storage), but by 1993, a bare-bones cheap-o Chinese CD-ROM drive with an off-the-shelf controller chipset was probably cheaper than an HD 3.5" floppy drive (unless Atari had a sizable back stock of drives/controller ICs left over from ST production).

 

 

 

By the way, CoJag can do DRAM->DRAM texture mapping in 5 cycles. This is the fastest way to texture map on the Jaguar, since there is no extra overhead in copying stuff in or out of SRAM.

The CoJag also uses VRAM, right? (And a 25 MHz 'EC020 or R3000 . . . though even with the VRAM and enhanced CPU, it was probably nominally cheaper to manufacture than the PSX, let alone the Saturn ;) -not counting economies of scale or vertical integration.)

 

You could also just use faster ROM in the cartridge. For another couple of dollars, your cart could use 200ns ROMs, allowing ROM->DRAM texture mapping in 8 cycles, which is as fast as all of the fastest approaches we've discussed above once you include the copy in/out overhead.

Yes, but then you need uncompressed textures in ROM on top of faster/more expensive ROM, further negating any practical use. (One of the major advantages of having all that RAM was allowing heavily compressed games on cart . . . not as cheap as CDs, but still a lot better than keeping all that data uncompressed in ROM -which is pretty much what you had to do for 32x games, with their limited RAM.)

 

Not to toot my own horn, but the Skunkboard can do texture mapping from ROM->DRAM in 8 cycles as well. This is enabled by default. Try it out with your own code!

Yeah, in the context of modern homebrew, ROM (or flash) costs and speeds are non-issues . . . so you could push 75 ns ROMs with uncompressed textures. (Assuming the Jaguar's ROM speed is programmable and not locked at 375 ns -I haven't gotten a straight answer on this, but it seems like it should be programmable.)

That's exactly what 32x homebrew is doing . . . well, within the speed/capacity constraints of current flash carts. (And capacity is still an issue, since the max normal configuration is 4 MB flat-mapped or the 5 MB SSFII mapper configuration -that mapper was designed to support up to 32 MB, but current flash carts only support it configured as 5 MB; the Neo Myth also has a custom mode that maps 8 MB, iirc.)

 

I wonder if Atariowl's demos are exploiting this. (I know Chilly Willy's Yeti3D demo does on the 32x: 256-color uncompressed textures are stored in ROM -though shaded/lit in 15-bit RGB . . . or it may technically be 12-bit RGB due to the soft-SIMD routine used.)
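(A guess at what that soft-SIMD routine does -this is the generic technique, not Chilly Willy's actual code: spread a 4:4:4 RGB pixel across a 32-bit word with guard bits between channels, so one integer multiply by a 4-bit light level scales all three channels at once. That packing is exactly why the effective precision drops from 5:5:5 to 4:4:4, ie 12-bit:)

```c
#include <stdint.h>

static uint16_t shade_rgb444(uint16_t rgb444, unsigned light /* 0..15 */)
{
    /* spread 0x0RGB into 8-bit lanes (4 guard bits per channel) */
    uint32_t spread = (uint32_t)(rgb444 & 0x00F)
                    | ((uint32_t)(rgb444 & 0x0F0) << 4)
                    | ((uint32_t)(rgb444 & 0xF00) << 8);
    uint32_t lit = spread * light;   /* scales R, G, B in one multiply;
                                        15 * 15 = 225 fits in each 8-bit
                                        lane, so channels never bleed   */
    /* take the high nibble of each 8-bit lane and repack to 4:4:4 */
    return (uint16_t)( ((lit >> 4)  & 0x00F)
                     | ((lit >> 8)  & 0x0F0)
                     | ((lit >> 12) & 0xF00) );
}
```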

 

It's also technically possible to put a small PSRAM in the cartridge itself, allowing 5 cycle PSRAM->DRAM texturing. Talk about an echo of the old 2600 days! (Sadly, there is no way to put a DRAM in, but 128KB PSRAMs were under $3 in the Jaguar's prime days.) 128KB is sufficient for scenes in AvP or Doom without too much costly texture swapping behind the scenes.

Putting a 128k DRAM inside the Jag would have been a much better investment. (Avoiding game price inflation and/or profit loss on software -not to mention the greater risk/investment in cart manufacturing.)

 

That, or if they really planned to add it a couple years after launch, maybe add a simple socket for a DRAM card/stick (in that case, perhaps 512k), offer it at an affordable price, and pack it in with some games.

Though RAM expansion would be even more important for a CD-based system . . . that's a thought too: cut some of the cost of a built-in CD drive by reducing RAM capacity, but include a cost-effective, easy-to-use RAM expansion port. (A hell of a lot more realistic to market a simple RAM module than a CD add-on ;) . . . not to mention cheaper/easier to build in as standard on later models -especially from 1996 onward, when RAM price stagnation ended.) Plus, the expansion module could widen the 2nd bank: 512k at 16 bits could get a module with another 512k at 16 bits, boosting that bank to 1 MB at 32 bits, or 1 MB at 16 bits could become 2 MB at 32 bits . . . or 512k at 16 bits could even become 2 MB at 64 bits -except that would require 48 data lines on the RAM port rather than 16, and a 1.5 MB RAM card- so the 16=>32-bit examples make more sense.

 

 

 

. . . .

 

 

 

 

But for all of this tech talk on how the Jag could have been different or better . . . or even some of the less technical issues (like the inherent marketability and cost efficiency of CD-ROM), it all still would have been a bit moot due to Atari's position at the time.

They were pretty much screwed in the US, and their situation didn't foster particularly good management either (under those circumstances, they did pretty damn good though).

At best they might have been able to establish the Jaguar as a viable long-term platform in Europe (especially the UK) . . . and had they managed that, they might have gotten enough support to carve out a notable niche in the US -but all of that would have required ideal circumstances of good management making the best of hard times. (And times were even harder than when Atari Corp was established in 1984 . . . except they had less debt -but also far less potential for the future.)

 

As such, if one were to make hypothetical suppositions about missed potential or missed opportunities (or just historical observations of the company and market), the time to really look at is 1989-1991. That was pretty much their last chance to make it big in the mass market . . . the potential for both Atari Corp games/entertainment and computers died in that period. Some would tie that directly to Sam Tramiel's management, though other factors played in as well: the DRAM crisis of '88, the A500 matching the 520STFM that year . . . not to mention Michael Katz leaving in early 1989. He was largely responsible for Atari being able to hang in there as a distant 2nd against Nintendo, with around 2x Sega's market share (sometimes more) in '86-89 . . . so much so that Sega offered exclusive licensing rights for the Mega Drive to Atari Corp in 1988. Katz favored the deal, but Tramiel and Dave Rosen couldn't agree on the terms, especially due to a conflict of interests in Europe: the contract was for North America only, so Atari would still have been competing with Sega in Europe to some degree. (I assume Jack Tramiel understood the ST's position in Europe well enough to know it was very significant in the games market, and obviously the 7800 was being brought over to Europe around that time as well.)

 

So, yeah, lacking any follow-up to the 7800 (a "16-bit" generation console) left Atari to largely fade from the US game market, especially with the poor performance (and arguably poor marketing/management) of the Lynx. (Really ironic that Katz left just a few months before Atari launched a machine brought to them by Epyx -where Katz had been president prior to joining Atari Corp in '85.) And then there was the botched management of the computer line: a general lack of timely/efficient/competitive evolution of the hardware, issues with marketing and market positioning (in different respects in Europe and the US), etc., etc. . . . and Atari had in-house alternatives being considered at the time.

 

Unlike 1985, 1989 saw Atari Corp just past their 1988 peak: debt had been shed a couple years prior, positive assets had been established, and market share was significant in consoles and computers (the latter mainly in Europe). They had gone from a moderately sized start-up tech company that had taken on a large chunk of Atari Inc's old consumer division properties (and debt) to a successful, profitable, multidivision entertainment/technology company in the Fortune 500. They were in a position where significant investment spending was a realistic option to push into the mass market for games again, especially with at least decent hardware, good marketing, and good software support -including fostering good relationships with licensed 3rd party publishers. (Like what Katz was capable of pulling off -which he did his best at with Atari's limited marketing budget for the 7800/2600/XEGS, but was really allowed to show at Sega in 1990 ;))

 

Not to mention the potential to maintain/improve the ST line. (Except both issues should have been worked on well before 1989 . . . a serious, competitive successor/evolution of the ST should have been in the works from day 1, and a console definitely should have been in the works early enough to push for a 1989/1990 release. Technically they could even have aimed at a convergent evolution of the ST and game system plans to make more efficient use of engineering resources -not necessarily directly repurposing ST hardware as a console, but perhaps designing new custom hardware that could be used in a console as well as fit as an enhancement to the ST- even more so since gaming performance would have been a substantial boon for the ST's marketability as well.) Heh, if they'd managed to hang on at least astride of CBM's position in Europe, Atari could potentially have pulled ahead again as CBM collapsed under its own weight in '94. (Granted, PCs would still have been pushing in too . . . but that could have been far more gradual if the ST and Amiga hadn't failed ~1992-1994. In much of Europe, PCs didn't get big for the masses until after Atari's and CBM's platforms had died off on their own -makes you wonder if they could have retained a niche in that region to this day, sort of like Apple has, but in a bit of a different context.)

 

 

Oh, and, of course, had Atari had a (reasonably) successful "16-bit" game console (not to mention better-maintained success in computers), the Jaguar wouldn't have existed as it did in 1993. (They wouldn't have needed to rush a new system out so soon, so a longer, more mature design cycle would have been possible -maybe ending up with something between the Jaguar and Jag II- not to mention the better software support facilitated by an existing market position.)

 

 

Atari also never had a chance in Japan . . . even Atari Inc with the VCS, for that matter -that is, unless they had licensed/partnered with a powerful domestic Japanese company for distribution. In the context of Atari Inc, that might have been Namco -at least had they maintained a good relationship after the Atari Japan buy-out . . . but in the context of Atari's desperate position with the Jaguar in 1993, some interesting thoughts come to mind:

You had Bandai considering entry into the console market, and NEC a bit confused with their next-gen entry (apparently getting spooked by Sony and rushing out somewhat dated hardware that had been shelved a couple years prior). With NEC's position and Atari having nothing to lose, there might have been some chance in negotiations there. (Or perhaps even a trade . . . Atari had a better chipset than NEC at the time, and NEC had powerful manufacturing resources, so perhaps something like free licensing of the Jaguar chipset in return for attractive deals on RAM, or possibly even a CD drive and/or an NEC CPU -especially since the Jag could accept varying RAM and CPU configurations fairly easily. Atari's success outside Japan would also have been in NEC's best interests, since it would mean more incentive for 3rd party software support in all regions -even more so if Atari sweetened the pot with royalties on sales in the US/Europe.)

Edited by kool kitty89
