
Why CRY?


kool kitty89


We are aware it's never going to be a 3D engine like the PSX, but just wondering out loud if the 'blitter trick' improved the Jag's lot at all in throwing polygons to the screen.

 

BS/G never really throws around all that many polygons at once from what I've seen. I'm not sure it is doing anything particularly taxing or performing special or 'secret' tricks with the Jaguar hardware. I'm not basing that on any technical understanding of the hardware or the game, only the fact that it appears to run in Virtual Jaguar when other games (ones that are seen as being average or poor games, both technically and gameplay-wise) do not. Maybe that's not the best way to judge these things, but from using emulators of many systems in the past, it's usually the special-case games that abuse the hardware that are the last ones to be supported.

 

That is interesting. The BattleSphere development blog claims they found a hack for the blitter that improves its performance, but you say VJ can run it in emulation without even knowing what this 'blitter trick' involves. Hmm.


it's usually the special-case games that abuse the hardware that are the last ones to be supported.

This is true, but it's usually not the technically advanced games that abuse the hardware. A lot of technically advanced games are pretty well written, and they use the hardware in ways that are well documented, or, at least, do make sense, and aren't so bad to emulate.

 

One secret to getting good performance out of the Jaguar isn't abusing it, but truly understanding its limitations and working within them. You can achieve amazing effects that way without abusing hardware.

 

One type of hardware "trick" that Virtual Jaguar "correctly emulates" is anything that saves memory bandwidth or cycles. Finding memory access patterns that are quick on the real Jaguar can double performance. But this sort of trick makes no difference in VJ, because VJ doesn't try too hard to emulate all the bugs and quirks that slow down real Jaguar games. Unlike on the real Jaguar, fast code and slow code are emulated at the same speed in VJ.

 

Likewise, there are some hardware limitations (or bugs) that you run into on a real Jaguar that are absent in VJ. For example, horrible things happen if you "cheat" and touch certain blitter registers at the wrong time on a real Jaguar; others are safe to touch much earlier. On VJ, those problems aren't emulated. So on a real Jaguar, you might find a way to shave cycles off the next blit by writing registers earlier than usual, but not so early that you trip the bugs. On VJ, this optimization works fine because there is no bug to trip.
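To make that concrete, here is a minimal C sketch of the two setup strategies described above. The register names and addresses (A1_BASE, B_COUNT, B_CMD, with bit 0 of a B_CMD read set when the blitter is idle) follow the usual Jaguar memory map, but exactly which registers are safe to write early is the undocumented part, so treat blit_early as an illustration rather than a recipe:

```c
#include <stdint.h>

/* Jaguar blitter registers (addresses per the standard memory map;
   treat them as assumptions if your docs differ). */
#define A1_BASE (*(volatile uint32_t *)0xF02200) /* destination window base       */
#define B_CMD   (*(volatile uint32_t *)0xF02238) /* write: start blit; read: status */
#define B_COUNT (*(volatile uint32_t *)0xF0223C) /* inner/outer loop counts        */

#define BLITTER_IDLE 0x01  /* bit 0 of the B_CMD read-back = idle */

/* Conservative approach: wait until the blitter is completely idle
   before touching ANY register for the next blit. Always safe, slow. */
static void blit_safe(uint32_t base, uint32_t count, uint32_t cmd)
{
    while (!(B_CMD & BLITTER_IDLE))
        ;                  /* spin until the current blit finishes  */
    A1_BASE = base;
    B_COUNT = count;
    B_CMD   = cmd;         /* kick off the next blit                */
}

/* Risky approach: set up registers that (you hope) the running blit
   no longer reads. WHICH registers are safe this early is hardware
   lore, not documentation -- get it wrong and "horrible things
   happen" on real silicon. An emulator that doesn't model the
   hazard runs both versions identically. */
static void blit_early(uint32_t base, uint32_t count, uint32_t cmd)
{
    A1_BASE = base;        /* written while the previous blit runs! */
    B_COUNT = count;
    while (!(B_CMD & BLITTER_IDLE))
        ;                  /* only the final kick has to wait       */
    B_CMD   = cmd;
}
```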

 

- KS

Edited by kskunk

A lot of technically advanced games are pretty well written, and they use the hardware in ways that are well documented, or, at least, do make sense, and aren't so bad to emulate.

 

That might be true for high level languages, but when it comes to games written in assembly language the "well written, cleanly coded" ones are usually the ones that run dog slow and look like crap. Neat-looking, readable code won't win you any advantage in the speed department. With the Jaguar, that is even more important, as interleaving two routines on the RISC chips will gain you a tremendous speed boost.


That might be true for high level languages, but when it comes to games written in assembly language the "well written, cleanly coded" ones are usually the ones that run dog slow and look like crap.

Haha, I know just what you mean. I should have chosen my words more carefully. I have coworkers who believe in "clean code" and it is horrendously slow.

 

I meant "well written" from my perspective as an emulator author, which really is more like "properly thought out". An emulator can execute a tangled mess of unrolled interleaved cycle-counted loops with ease, the kind of code that would give a human a migraine.

 

Maybe you can help me find the phrase for this type of code: There is a type of code that gives emulator authors migraines, where you can almost see the marks from where the programmer banged his head repeatedly against it, mindlessly adjusting and fiddling until it just magically works on the real hardware, only to move on to the next piece of totally bad code.

 

Wrong values go into registers at the wrong time, illegal bits are set, dead code is inserted out of superstition -- code that touches registers in ways that are really a no-op, but are actually causing the timing to align just right -- most of the time.

 

The type of code that kills emulators often isn't well designed, it evolves through natural selection! ;)

 

-KS


Maybe you can help me find the phrase for this type of code: There is a type of code that gives emulator authors migraines, where you can almost see the marks from where the programmer banged his head repeatedly against it, mindlessly adjusting and fiddling until it just magically works on the real hardware, only to move on to the next piece of totally bad code.

 

GEMDOS? :)


Dunno, but even BattleSphere's 3D stuff doesn't look good when compared to the PS1.

We are aware it's never going to be a 3D engine like the PSX, but just wondering out loud if the 'blitter trick' improved the Jag's lot at all in throwing polygons to the screen.

 

BS/G never really throws around all that many polygons at once from what I've seen. I'm not sure it is doing anything particularly taxing or performing special or 'secret' tricks with the Jaguar hardware. I'm not basing that on any technical understanding of the hardware or the game, only the fact that it appears to run in Virtual Jaguar when other games (ones that are seen as being average or poor games, both technically and gameplay-wise) do not. Maybe that's not the best way to judge these things, but from using emulators of many systems in the past, it's usually the special-case games that abuse the hardware that are the last ones to be supported.

I didn't think BS was really known for having impressive 3D. Decent/good graphics (relatively speaking), sure, but that's not its claim to fame.

It's the AI and game-logic complexity, as well as the overall gameplay mechanics (and networked multiplayer system), that the game really shines in.

 

Of course, I can only speak from the perspective of what others have said, since I've never had the opportunity to actually play it. ;)

A note on my original topic:

After thinking this over again (and the previous comments on the topic), it really seems like using a true YCbCr colorspace might have been a really good option, or having variable colorspace selection that's all handled on the analog end (like video DACs that can handle RGB and YUV; or, if you only did YUV, circuitry to convert that to analog RGB, if you supported RGB at all - for the US, at least, having a YUV to composite/S-video encoder would be fine). Converting colorspaces on the digital end takes far more complexity (and has far more limitations) than doing so on the analog side.

 

As such, you could easily have 8-4-4 YCbCr (256 shades from black to white for each of 256 unique hues) and 8-8-8 YCbCr for the 24-bit mode (except with luminance manipulated totally independently from color, unlike the Jag's 24-bit RGB mode). In some respects you wouldn't have as optimal a set of hues as CRY, but you'd still have a good range of colors, and ones that could be logically blended (by averaging the two separate chroma elements) rather than the less optimal blending of CRY's colors (even with a 256x256 look-up table there would be some disadvantages - and obviously the overhead of the look-up itself).
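As a sketch of how simply such a format could be blended (the bit layout here is an assumption - this 8-4-4 format is hypothetical and never existed in hardware):

```c
#include <stdint.h>

/* Hypothetical 8-4-4 YCbCr pixel, as proposed above:
   bits 15-8 = Y (luma), bits 7-4 = Cb, bits 3-0 = Cr. */
static uint16_t pack_ycc(uint8_t y, uint8_t cb, uint8_t cr)
{
    return (uint16_t)(y << 8 | (cb & 0x0F) << 4 | (cr & 0x0F));
}

/* 50/50 blend: each channel is averaged independently, so hue and
   luminance mix "logically" -- no lookup table needed, unlike CRY. */
static uint16_t blend_ycc(uint16_t a, uint16_t b)
{
    uint8_t y  = (uint8_t)((( a >> 8)         + ( b >> 8))         >> 1);
    uint8_t cb = (uint8_t)((((a >> 4) & 0x0F) + ((b >> 4) & 0x0F)) >> 1);
    uint8_t cr = (uint8_t)((( a       & 0x0F) + ( b       & 0x0F)) >> 1);
    return pack_ycc(y, cb, cr);
}
```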


YCbCr is not a pure luma/chroma scheme; Y = 0 doesn't result in black output if Cb and/or Cr are nonzero. CRY doesn't have this problem.

 

Remember, CRY was supposed to be used for Gouraud shading and not much else. Color accuracy wasn't a concern (non-textured Gouraud isn't going to look realistic anyways), while smooth shading was. Despite being nonstandard, it works quite well for its intended purpose; for the rest, you've got RGB565 and RGB888.


YCbCr is not a pure luma/chroma scheme; Y = 0 doesn't result in black output if Cb and/or Cr are nonzero. CRY doesn't have this problem.

Yes, and it has the same problem with Y=255 (not white). Normal YCbCr hues only desaturate somewhat (towards white/black) as luminance increases or decreases, so what you'd really want is a more customized scheme that desaturates fully towards white/black for all 256 chroma values (so Y=128 would be full saturation at normal light levels; anything beyond that becomes more desaturated towards white - brighter and paler - and anything below it shades more towards black as you get closer to 0). Or perhaps only do that for shading towards black, since full desaturation towards white isn't as useful for normal lighting and alpha blending effects.
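A minimal sketch of the mapping being described, with Y=128 as the full-saturation pivot (the chroma_table contents are left open, since this whole colorspace is hypothetical):

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b; } rgb24;

/* 256 fully saturated hues at "normal" intensity (Y = 128); how this
   table is filled is up to the designer -- it's the analog of CRY's
   256-entry chroma table. */
extern const rgb24 chroma_table[256];

/* The scheme described above: Y = 0 is always black, Y = 255 is
   always white, Y = 128 is the pure hue at full saturation. */
static rgb24 custom_to_rgb(uint8_t c, uint8_t y)
{
    rgb24 hue = chroma_table[c], out;
    if (y < 128) {
        /* ramp from black up to the full-saturation hue */
        out.r = (uint8_t)(hue.r * y / 128);
        out.g = (uint8_t)(hue.g * y / 128);
        out.b = (uint8_t)(hue.b * y / 128);
    } else {
        /* ramp from the hue toward pure white (desaturation) */
        uint8_t t = (uint8_t)(y - 128); /* 0..127 */
        out.r = (uint8_t)(hue.r + (255 - hue.r) * t / 127);
        out.g = (uint8_t)(hue.g + (255 - hue.g) * t / 127);
        out.b = (uint8_t)(hue.b + (255 - hue.b) * t / 127);
    }
    return out;
}
```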

 

In any case, the main idea would be using a custom colorspace through the video DACs and converting the analog output to standard formats (NTSC/PAL Y/C or composite, and RGB) either on-chip or with additional external video encoding circuitry (but without the option to use common RGB transcoders directly - or even YPbPr transcoder ICs, though those are generally less common).

 

In some respects, that would be closer to the route taken with early console and home computer video chips (using a Y/C colorspace with part or all of the video encoding done internally, with Y/C and/or composite video output lines on the chip).

 

Remember, CRY was supposed to be used for Gouraud shading and not much else. Color accuracy wasn't a concern (non-textured Gouraud isn't going to look realistic anyways), while smooth shading was. Despite being nonstandard, it works quite well for its intended purpose; for the rest, you've got RGB565 and RGB888.

Textured Gouraud shading isn't going to look realistic either . . . though obviously more so in some respects than untextured polygons (some "realistic"-looking things can be done well with just smooth-shaded polygons, as well as or better than with textures of the time).

 

But as to using RGB (5-6-5, or 5-5-5 with 15-bit CRY in variable mode), aren't effects relatively poorly supported (or not at all) in 15/16-bit RGB for the OPL and blitter? (I know shading doesn't work at all in 16-bit RGB, but I'm not sure about translucency effects - and the GPU wouldn't be well suited to doing such effects in software due to the odd 5/6-bit boundaries of the RGB elements.)


Are you actually trying to create a game on the Jaguar or are you just experimenting with different lighting effects on it?


But as to using RGB (5-6-5, or 5-5-5 with 15-bit CRY in variable mode), aren't effects relatively poorly supported (or not at all) in 15/16-bit RGB for the OPL and blitter?
 

I just tried setting the RMW bit on an RGB16 image, and it does work... I haven't messed with CRY and the transparency feature much, but I did experiment with a quick thrown-together program that had one 8-bit item in the background using the CLUT, and then an object in the obj list pointing to a CRY image with the RMW bit set. The CRY bitmap was just a solid single-color 64x64 tile over the 8-bit object, and the transparency effect worked flawlessly. However, setting the RMW bit on the CRY image and then putting that over an RGB16 bitmap showed some ugly results... Again, I haven't really messed with it much other than a quick 15 minutes of toggling a few things. Interesting feature though, for sure. I'll have to look into it more and see if I can't use it somehow in the future.

Edited by rush6432

Are you actually trying to create a game on the Jaguar or are you just experimenting with different lighting effects on it?

No, no, it's more of a question about the design philosophy of the Jaguar, and whether there were any practical alternatives at the time (with a few side comments on the potential for other consoles/graphics cards/etc. to use non-RGB colorspaces at the time).

I just tried setting the RMW bit on an RGB16 image, and it does work... I haven't messed with CRY and the transparency feature much, but I did experiment with a quick thrown-together program that had one 8-bit item in the background using the CLUT, and then an object in the obj list pointing to a CRY image with the RMW bit set. The CRY bitmap was just a solid single-color 64x64 tile over the 8-bit object, and the transparency effect worked flawlessly. However, setting the RMW bit on the CRY image and then putting that over an RGB16 bitmap showed some ugly results... Again, I haven't really messed with it much other than a quick 15 minutes of toggling a few things. Interesting feature though, for sure. I'll have to look into it more and see if I can't use it somehow in the future.

How about an RGB object over another RGB object?

And how about blending with blitter- or GPU-rendered pixels? (So not one OPL object over another, but pixels being rendered to a common framebuffer - I know the blitter's shading logic only works on 4- and 8-bit boundaries, so it's not useful for 15/16-bit RGB, but I'm not sure about any support for blending/averaging.)

On another note, it is possible to do saturation (additive) lighting/blending effects to some extent in CRY (for explosions/fire/etc., where you'd want the pixels to increase in total brightness/intensity rather than be averaged). You can't use additive blending like RGB, but you could add the Y values and average the C values. (Except in CRY you won't be able to desaturate towards white like RGB or - to some extent - YCbCr; in CRY you just go from black up to "normal" color saturation at max intensity. Though maybe that would work fairly well for some additive/saturation lighting/blending effects if you made textures that mainly used paler/less saturated CRY hues and set things up so "normal" brightness was Y=128 or somewhere below that.)
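A sketch of that add-Y/average-C blend, assuming the usual CRY layout (two 4-bit chroma coordinates in the high byte, 8-bit intensity in the low byte). On real hardware the blitter can do the saturating intensity add itself; the chroma averaging is written out in plain C purely for illustration:

```c
#include <stdint.h>

/* CRY pixel: bits 15-12 and 11-8 are the two 4-bit chroma coordinates
   (a 16x16 hue grid), bits 7-0 are the 8-bit intensity Y. */
static uint16_t cry_additive_blend(uint16_t dst, uint16_t src)
{
    /* saturating add on intensity: brightness accumulates like light */
    unsigned y = (dst & 0xFF) + (src & 0xFF);
    if (y > 0xFF) y = 0xFF;

    /* average each 4-bit chroma coordinate so the hues mix */
    unsigned c1 = (((dst >> 12) & 0x0F) + ((src >> 12) & 0x0F)) >> 1;
    unsigned c2 = (((dst >>  8) & 0x0F) + ((src >>  8) & 0x0F)) >> 1;

    return (uint16_t)(c1 << 12 | c2 << 8 | y);
}
```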

 

A CRY-like colorspace that desaturated towards white as well as black (such that Y=0 is black and Y=255 is pure white for all hues) would obviously be much more useful for such effects (as well as for fading out towards white for distance fog in 3D games).

It's obviously impractical to do that through conversion to digital 24-bit RGB values (it would take a 65,536x24-bit table - 192 kB - vs the simple 256x24-bit table used for CRY, not to mention a hell of a lot of errors/limits from working in 24-bit RGB), but directly outputting a custom colorspace as an analog signal and then doing the conversion to other formats (NTSC/PAL Y/C, RGB, etc.) with additional analog circuitry should have been a far more realistic prospect for the time. (If they chose to use common off-the-shelf RGB transcoders for composite/S-video, it would obviously make the most sense to limit the custom circuitry to converting only to analog RGB and leave the rest to the standard encoder chips.)
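For contrast, here's why CRY's table stays so small: the chroma byte indexes a full-intensity RGB triple and Y just scales it linearly, so the conversion factors into 256 entries plus a multiply. (A sketch of the principle, not the exact hardware math.)

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b; } rgb24;

/* The 256-entry CRY chroma table: each entry is the RGB value of that
   hue at full intensity (this is the small table mentioned above). */
extern const rgb24 cry_chroma[256];

/* CRY -> 24-bit RGB is a lookup plus a linear scale by Y. A colorspace
   that also desaturated toward white couldn't be factored this way,
   hence the 65,536-entry table estimated above. */
static rgb24 cry_to_rgb(uint16_t pixel)
{
    rgb24 hue = cry_chroma[pixel >> 8];   /* high byte = chroma index */
    uint8_t y = (uint8_t)(pixel & 0xFF);  /* low byte = intensity     */
    rgb24 out = {
        (uint8_t)(hue.r * y / 255),
        (uint8_t)(hue.g * y / 255),
        (uint8_t)(hue.b * y / 255),
    };
    return out;
}
```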

It may still have been practical to support RGB modes as well (with the video DACs switching to direct RGB rather than driving the custom colorspace), but probably not a variable mode like the 15-bit RGB/CRY mode with per-pixel colorspace definition (unless the video DACs could respond quickly enough to allow changing colorspace on a per-pixel basis).


In any case, the main idea would be using a custom colorspace through the video DACs and converting the analog output to standard formats (NTSC/PAL Y/C or composite, and RGB) either on-chip or with additional external video encoding circuitry.
That would have been complicated and expensive (analog stuff has calibration and precision problems, whereas digital stuff works in exactly the same way every time). All for supporting something that wasn't even part of the design goals. Remember Atari were penny-pinchers, too :)

 

Were there better options than CRY? Heck yes.

Were there better options than CRY considering what latitude the designers had? I wouldn't say so, and we can't rewrite history anyways.

 

You know, FPGA demo boards are quite affordable these days. If you want a console that's designed exactly as you'd like, nobody's preventing you from actually building it :)

 

Textured Gouraud shading isn't going to look realistic either . . . though obviously more so in some respects than untextured polygons.
Yes, that's what I mean. 1993~1994 3D technology isn't going to look photo-realistic, whatever you do, but adding textures is a huge step forward. Heck, even Wolf3D had textures; I doubt it would have been so popular had it been done in Gouraud (yes, I know it uses raycasting instead of polygon-based 3D, but that's not the point).

 

But as to using RGB (5-6-5, or 5-5-5 with 15-bit CRY in variable mode), aren't effects relatively poorly supported (or not at all) in 15/16-bit RGB for the OPL and blitter? (I know shading doesn't work at all in 16-bit RGB, but I'm not sure about translucency effects - and the GPU wouldn't be well suited to doing such effects in software due to the odd 5/6-bit boundaries of the RGB elements.)
The object processor's RMW effects don't work properly on RGB data; they're only intended for CRY data. If you want to do special effects in RGB, you have to handle them in software on the GPU. If you don't like 5:6:5, you can use 8:8:8 (at the expense of using twice as much bandwidth).
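As an example of the kind of special-effect code that ends up in software, here's the classic single-mask 50% blend for 5:6:5 pixels - plain C standing in for the GPU RISC code that would actually run it:

```c
#include <stdint.h>

/* 50% alpha blend of two RGB565 pixels without unpacking the channels:
   clear the low bit of each 5/6-bit field, shift right, and add.
   The 0xF7DE mask (binary 11110 111110 11110) keeps each halved field
   from borrowing into its neighbor. Costs one LSB of precision per
   channel (slight darkening), which is the usual trade-off. */
static uint16_t rgb565_blend50(uint16_t a, uint16_t b)
{
    return (uint16_t)(((a & 0xF7DE) >> 1) + ((b & 0xF7DE) >> 1));
}
```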

 

You can do some additive/subtractive lighting effects in CRY (either directly via the OP, or on the GPU), but it's pretty limited compared to "real" stuff like alpha-blending.


I'll have to look into it more and see if I can't use it somehow in the future.

 

You're making a game that's based on lights - you're not going to get a better chance, surely ;-)

 

touché

 

 

I'm sure I'll find time to play around with this more, and probably implement CRY for the lights and such. It would add a nice touch to the overall look.

Edited by rush6432

In any case, the main idea would be using a custom colorspace through the video DACs and converting the analog output to standard formats (NTSC/PAL Y/C or composite, and RGB) either on-chip or with additional external video encoding circuitry.
That would have been complicated and expensive (analog stuff has calibration and precision problems, whereas digital stuff works in exactly the same way every time). All for supporting something that wasn't even part of the design goals. Remember Atari were penny-pinchers, too :)

How does that compare with the design philosophy of the various old video chips that did use (semi-)custom Y/C colorspaces?

TIA, CTIA/GTIA, Apple II, the TMS9918 family, VIC, VIC-II, MARIA, the NES PPU, etc. (most outputting composite and/or Y/C video directly, though some TMS99xx variants used YPbPr)

 

The main difference with those is that I was suggesting conversion to analog RGB rather than just outputting NTSC/PAL composite/S-video (or YPbPr for the TMS9928). Well, that and the considerably higher color depth than any of those examples (MARIA/GTIA were the largest of those, but still only 256 colors/shades total).

However, having RGB (for the consumer models at least) really wasn't important (even in mainland Europe it was only really popular in select regions - especially France), so having video output directly as Y/C (with no need for an external video encoder) could actually be MORE cost effective (ie no need to buy off-the-shelf video encoders, and no need for the added board space to accommodate them). Except that's assuming you dropped the RGB modes entirely in favor of optimizing completely for Y/C graphics.

 

In the case of actually supporting transcoding to RGB, that should be (technically) simpler than the existing hardware that converts RGB to NTSC/PAL composite/S-video; the difference is that RGB transcoders are common off-the-shelf components rather than custom hardware (be it embedded on the main ASIC or in a small support chip).

 

You know, FPGA demo boards are quite affordable these days. If you want a console that's designed exactly as you'd like, nobody's preventing you from actually building it :)

 

That's not what this is about though . . . I'm thinking mainly from a historical standpoint (ie why they made the decisions they did at the time, and what else - if anything - might have been a competitive alternative . . . aside from the side issue of contemporary technology companies with different limits on design specs - namely far fewer cost constraints - that may have had even more potential for pushing alternate color models, but pretty much all of them just used RGB).

 

4-4-4-4 RGBI/RGBY (interpreted to 24-bit RGB) would still have been an interesting option too (24-bit-quality shading gradients, but blending effects closer to normal RGB - albeit with some of the limitations of 12-bit RGB), but in the context of the Jaguar it would be less efficient than CRY (needing to manipulate four 4-bit channels rather than a single 8-bit channel and two 4-bit channels - and only the single 8-bit channel for shading effects . . . depending on how you organized the Y/I channel of RGBI, you would be able to do some limited lighting/shading by manipulating it alone, but to get really smooth shading effects you'd need to modify all four channels).

 

 

Textured Gouraud shading isn't going to look realistic either . . . though obviously more so in some respects than untextured polygons.
Yes, that's what I mean. 1993~1994 3D technology isn't going to look photo-realistic, whatever you do, but adding textures is a huge step forward. Heck, even Wolf3D had textures; I doubt it would have been so popular had it been done in Gouraud (yes, I know it uses raycasting instead of polygon-based 3D, but that's not the point).

 

It depends on what you're trying to portray, and what the context is (context could include limited memory/bandwidth, which would make textures far more limited, especially without look-up).

 

Some "realistic" objects will look far better with tactful use of smooth shaded polygons than flat shaded low-res, unfiltered, textures. Say you want to render smooth terrain (like sand dunes), human skin, some plants/trees/stone/building, shiny/satiny or metallic objects, etc.

 

Like the g-shaded environment in the Zyrinx 32X demo:

(albeit voxels would be better suited for a lot of smooth terrain type stuff, but that's a separate issue ;))

 

There are cases where both are important . . . untextured models worked very well for X-Wing and Tie Fighter (especially with the latter's Gouraud shading), while 2D scaled objects and textured spans worked well for Catacomb3D/Wolf3D/Doom/Duke3D/etc. (and Quake, etc.)

Wing Commander III took the texture mapped route (all flat shaded by the look of it), but with a generally different graphical style than Tie Fighter.

 

Of course, that's aside from smooth-shaded AND texture-mapped objects (though some things still favor just shading unless you have high-res/filtered perspective-correct textures).

 

Then there are other contextual issues, like color depth:

for 256 palettized-color stuff, you can generally either work with a very limited/optimized palette capable of somewhat decent Gouraud shading, or optimize the colors for textures (with even more limited shading and more noticeable posterization). You really can't get very decent smooth shading in 256 colors without dithered shading, but few games opted to do that - the Tie Fighter engine was among those. (Albeit banded/posterized shading - like DOS Tomb Raider - still looks better than faceted flat-shaded lighting IMO; in some cases, it's probably better to use no light-sourcing at all.)
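For the curious, dithered shading interpolation of that sort boils down to something like the following (assuming a hypothetical palette laid out as 16-shade ramps per hue - Tie Fighter's actual tables aren't documented here):

```c
#include <stdint.h>

/* 4x4 Bayer threshold matrix, values 0..15. */
static const uint8_t bayer4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 },
};

/* Dithered shading in a 256-color mode. "shade_fp4" is a 4.4
   fixed-point light level (integer shade 0..15 plus a 4-bit
   fraction); the fraction decides, per pixel, whether to pick the
   next-darker or next-brighter palette entry, so a smooth gradient
   emerges from only 16 real shades. */
static uint8_t shade_dithered(uint8_t hue_base, unsigned shade_fp4,
                              unsigned x, unsigned y)
{
    unsigned shade = shade_fp4 >> 4;    /* integer shade, 0..15     */
    unsigned frac  = shade_fp4 & 0x0F;  /* fractional part, 0..15   */
    if (frac > bayer4[y & 3][x & 3] && shade < 15)
        shade++;                        /* round up on this pixel   */
    return (uint8_t)(hue_base + shade); /* index into the hue ramp  */
}
```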

 

Plus, smooth shading is useful for much more than lighting effects. Even with no light sourcing whatsoever, you could use Gouraud shading to smooth out polygonal models, or in lieu of textures for some things (ie some limited cases where shading gradients could be used to create texture-like patterns on models - especially for decal polygons).

 

 

But as to using RGB (5-6-5, or 5-5-5 with 15-bit CRY in variable mode), aren't effects relatively poorly supported (or not at all) in 15/16-bit RGB for the OPL and blitter? (I know shading doesn't work at all in 16-bit RGB, but I'm not sure about translucency effects - and the GPU wouldn't be well suited to doing such effects in software due to the odd 5/6-bit boundaries of the RGB elements.)
The object processor's RMW effects don't work properly on RGB data; they're only intended for CRY data. If you want to do special effects in RGB, you have to handle them in software on the GPU. If you don't like 5:6:5, you can use 8:8:8 (at the expense of using twice as much bandwidth).

 

You can do some additive/subtractive lighting effects in CRY (either directly via the OP, or on the GPU), but it's pretty limited compared to "real" stuff like alpha-blending.

You can go a bit beyond that in CRY using look-up tables, but that's limited too (and adds overhead - especially for larger tables).


I think during the early and mid '90s, texture mapping and smooth shading weren't really as prominent as they were in 1995 and afterwards; 3D polygon-based games were really just seeing the light of day in the public eye as far as the trending tides of the industry went. Texture mapping and smooth shading were just coming out of the closet, so to speak; keep in mind that both the Atari Jag and the 32X are still products of the development of what 3D would become today. With Doom using simple ray-casting of the kind found in gaming as early as the early '80s on Atari 8-bits, the secrets of texture mapping and smooth shading were in the hands of companies like Silicon Graphics and the like, who invested a lot of time and money into the technology. Considering Sega's approach to their 3D development of the Virtua Racing arcade technology, with them going straight to NASA (if I recall correctly), they just had a better edge on what it takes to make top-notch 3D hardware; but the Jaguar still has a whole lot of juice as well with its technology, and it was the only game console out there with a 64-bit blitter. Both could have succeeded had they not rushed the systems out too soon. I just don't think the full spectrum of what was out there with 3D was available in 1992-93 like it was in 1995 and up; some people just weren't in the loop like that. Besides, whoever could define the market would certainly have a monopoly on it, which turned out to be the Sony PlayStation 1.

 

Then there are other contextual issues, like color depth:

for 256 palettized-color stuff, you can generally either work with a very limited/optimized palette capable of somewhat decent Gouraud shading, or optimize the colors for textures (with even more limited shading and more noticeable posterization). You really can't get very decent smooth shading in 256 colors without dithered shading, but few games opted to do that - the Tie Fighter engine was among those. (Albeit banded/posterized shading - like DOS Tomb Raider - still looks better than faceted flat-shaded lighting IMO; in some cases, it's probably better to use no light-sourcing at all.)

Plus, smooth shading is useful for much more than lighting effects. Even with no light sourcing whatsoever, you could use Gouraud shading to smooth out polygonal models, or in lieu of textures for some things (ie some limited cases where shading gradients could be used to create texture-like patterns on models - especially for decal polygons).

 

Another company I look at, as far as the use of 8-bit color goes, is SNK and how they used dithering in most of their games after 1994. The Jag was supposed to compete with the 3DO as far as technological and graphical feats go, but I think Atari should have continued to support the Lynx until they had a finished product with the Atari Jag 64. Despite that, any use of optimization with 256 colors is better than no optimization at all; plus, I don't think smooth shading was used for anything more than your typical Gouraud shading back then, which was the norm for its day. Dithering was there back then and used a lot in the old DOS games, both in 2D and 3D, which was something the 16-bits could do even with a coprocessor on a cartridge. SNK used it to death on their 2D platforms, and it was certainly an open door to some decent effects despite its shortcomings, and still is today. Looking at how the demo scenes use it on the Atari 8-bits and the C64 blows me away.

 

CRY seems to make for a smaller file size and seems to be more of an optimized hardware solution... Also, Atari is known for their exclusiveness with their hardware, having nothing but the Atari way of doing things exclusively on Atari machines. It was probably some business model they used in order to keep programmers making Atari stuff.


However, having RGB (for the consumer models at least) really wasn't important (even in mainland Europe it was only really popular in select regions - especially France), so having video output directly as Y/C (with no need for an external video encoder) could actually be MORE cost effective (ie no need to buy off-the-shelf video encoders, and no need for the added board space to accommodate them). Except that's assuming you dropped the RGB modes entirely in favor of optimizing completely for Y/C graphics.

 

Not sure where you are getting this from? RGB (or SCART) type cables were available and popular for pretty much every console in the UK, and I dare say the whole of Europe, pre-HD consoles. I had SCART for every console I owned; it was the first add-on I would purchase, before a second controller. And that's not just me: the cables were often bundled in with new units, or displayed in quantity along with the other accessories in the stores.

 

I think the French Jag doesn't have a modulator? (I may be wrong on this, but I have one that has no modulator or space for one; I have assumed it to be the French version.)


I think the French Jag doesn't have a modulator? (I may be wrong on this, but I have one that has no modulator or space for one; I have assumed it to be the French version.)
Correct. French Jaguars don't have the modulator (there's an unused area on the PCB instead), and there are no holes in the plastic where the RF connector and channel-selector switch should be.

Well, that and the considerably higher color depth than any of those examples (MARIA/GTIA were the largest of those, but still only 256 colors/shades total).
What can I say? You answered your own question there :)

 

Except that's assuming you dropped the RGB modes entirely in favor of optimizing completely for Y/C graphics.
...which made sense in the Atari 2600 era, but would have been catastrophic in 1993, when all consoles and computers (used to create the graphics) had been using the RGB colorspace for a long time already.

 

That's not what this is about though . . . I'm thinking mainly from a historical standpoint (ie why they made the decisions they did at the time, and what else - if anything - might have been a competitive alternative . . . aside from the side issue of contemporary technology companies with different limits on design specs - namely far fewer cost constraints - that may have had even more potential for pushing alternate color models, but pretty much all of them just used RGB).
Just about everything is possible if you have enough money to spend -- heck, in that case they could have added realtime alpha-blending and lots of other cool stuff. But I'm not sure I see the point in discussing "what-ifs", really.

Like the g-shaded environment in the Zyrinx 32X demo:

http://www.youtube.com/watch?v=vySOZUcOdng

 

Stop for a minute there. Demo? From Scavenger inc? You just shot yourself in the foot!

 

Yes, that video does look rather sexy (if of course your cup of tea is 5cm viewing distance :P), but there's no way in hell a well coded demo will compete with game code. Since good demos cheat massively to show things that are thought of as impossible, this is in no way a real benchmark for the console. For example, the whole geometry can be pre-calced, so the only thing the console does is render polygons on screen. Take a look at this: http://dhs.nu/video.php?ID=300 - seek around the 1 min mark. That's exactly what I describe above, and what can easily happen in the video you posted. I could go on and on for ages about how they can cheat in each screen, but I won't. Point is, the 32X can't do that in realtime and I'd say most consoles of that era, so it's pointless to speculate.

 

(albeit voxels would be better suited for a lot of smooth terrain type stuff, but that's a separate issue ;))

 

No they wouldn't actually, that'd look awful.

Edited by ggn

Point is, the 32X can't do that in realtime and I'd say most consoles of that era, so it's pointless to speculate.

 

Well... Chilly Willy did port a full textured 3D engine to the 32X, using fully texture-mapped and shaded polygons. It's a little sluggish, but at least now we know what the 32X can do... It just needs optimizing.

http://www.youtube.com/watch?v=ahISpH1eMzg

 

To me, Doom shows a lot of potential, even though it really isn't polygon-based, and it's as close to smooth texturing as you can get on the Jag.

http://www.youtube.com/watch?v=gOyhWT9mQy4

Edited by philipj

Well... Chilly Willy did port a full textured 3D engine to the 32X, using fully texture-mapped and shaded polygons. It's a little sluggish, but at least now we know what the 32X can do... It just needs optimizing.

http://www.youtube.com/watch?v=ahISpH1eMzg

That doesn't show the newest build, IIRC (which is considerably faster - though still without heavy optimization: done mostly in C, IIRC . . .)

there's no way in hell a well coded demo will compete with game code. [...] Point is, the 32X can't do that in realtime and I'd say most consoles of that era, so it's pointless to speculate.

I wasn't arguing that the demo would match game performance (though one of those demos is the prototype game engine for the 32X version of Amok - and if those demos aren't using the Genesis hardware, you'd have a 7.67 MHz 68k and 64K of RAM free to handle game logic and AI - not to mention if the Sega CD was used ;)).

 

My point was just that the graphical style of the Gouraud-shaded landscape looks better than flat-shaded low-res textures would.

 

(albeit voxels would be better suited for a lot of smooth terrain type stuff, but that's a separate issue ;))

 

 

No they wouldn't actually, that'd look awful.

Voxels kick the crap out of polygons, at least for terrain and given the system limitations of the time (systems not totally oriented towards polygon rendering, that is). It's a damn shame that Jag games didn't put an emphasis on that. (starting with coarser Comanche-like voxels and moving towards

However, having RGB (for the consumer models at least) really wasn't important (even in mainland Europe it was only really popular in select regions - especially France), so having video output directly as Y/C (with no need for an external video encoder) could actually be MORE cost effective (ie no need to buy off-the-shelf video encoders, and no need for the added board space to accommodate them). Except that's assuming you dropped the RGB modes entirely in favor of optimizing completely for Y/C graphics.

 

Not sure where you are getting this from? RGB (or SCART) type cables were available and popular for pretty much every console in the UK, and I dare say the whole of Europe, pre-HD consoles. I had SCART for every console I owned; it was the first add-on I would purchase, before a second controller. And that's not just me: the cables were often bundled in with new units, or displayed in quantity along with the other accessories in the stores.

No, I mean it wasn't very common to have higher-end TVs with SCART input in general until the late 90s (at the earliest). At least from the comments I've seen from UK gamers of the time, a TON of users were still working with RF-only TVs in the mid 1990s. (A high percentage in the US too, though probably not as high - and a ton of people also probably had composite but no S-video inputs. Really, prior to DVD becoming popular, game consoles and computers were among the few TV-capable devices to even bother supporting anything better than composite.)

Plus, didn't the UK (unlike mainland Europe) also sell TVs with RCA connectors for composite video and audio? (vs Europe going exclusively to SCART for all non-RF input to TVs until HDMI)

 

I'd also gotten the impression that there were serious complaints over the Saturn (and I think the PSX) launching in Europe with no RF adapter pack-in (which led to Nintendo's choice of bundling an RF box with the N64 when it launched in the UK - vs only composite AV cables for the standard US set).

 

I think the French Jag doesn't have a modulator? (I may be wrong on this, but I have one that has no modulator or space for one; I have assumed it to be the French version.)

Mainland Europe is a separate issue (especially France), as some countries mandated SCART for all TVs during the 80s (France was the first to do so, making SCART a requirement on all TVs sold there by 1980).

Thus, it's not surprising that RGB-native devices were released in France with only RGB connections.

 

The French Master System II was made with no RF modulator or composite video output; only the RGB and audio lines were connected on the AV port. (I think the French versions of the N64 were also the only models to support RGB without modding.)

I think during the early and mid '90s, texture mapping and smooth shading weren't really as prominent as they were in 1995 and afterwards; 3D polygon-based games were really just seeing the light of day in the public eye as far as the trending tides of the industry went. Texture mapping and smooth shading were just coming out of the closet, so to speak; keep in mind that both the Atari Jag and the 32X are still products of the development of what 3D would become today.

At the time of the Jaguar's release, you saw a lot of flat-shaded polygon arcade games and a few flat-shaded 3D computer games (as well as some significant pseudo-3D games - Doom was released in '93, Comanche and Wolf3D the year before, not to mention Wing Commander in 1990, among others).

 

However, texture-mapped arcade games were appearing on high-end arcade boards by then as well (namely Namco's System 22 and Sega's Model 2 - the latter not capable of hardware g-shading, only specular reflection and texture mapping - and also rendering in quads).

 

Smooth-shaded polygonal renderers on computers/consoles never spread widely before texture mapping got big (by the time CPUs got fast enough to handle smooth shading well, they could also handle texture mapping - and RAM capacity on average machines had increased to make things even more favorable; plus, 256-color graphics made g-shading even less attractive).

There were very few pure g-shaded polygon engines used in computer games (Tie Fighter and the 1994 X-Wing CD release - using the Tie Fighter engine - are among the very few I can think of; both of those used realtime dithering to allow decently smooth shading in 256 colors. That's a feature even rarer in software-rendered PC games than Gouraud shading itself - even the 256-color software renderer supported by Tomb Raider II, which went up to 1440x900 with perspective-correct textures, had no option for dithering, though the accelerated renderer did).

 

With Doom using simple ray-casting of the kind found in gaming as early as the early '80s on Atari 8-bits, the secrets of texture mapping and smooth shading were in the hands of companies like Silicon Graphics and the like, who invested a lot of time and money into the technology.

 

Were ray-casting engines even being used back then? I'd thought those didn't become common until the early 90s (with a handful used in the late 80s, along with many other developers trying to do similar effects using 3D polygons - I think MIDI Maze used ray-casting, though).

 

I know there were the fractal engines and several fully pre-rendered corridor/maze/dungeon-crawler games (like Tunnel Runner), but those weren't actual ray-casting engines.

 

Considering Sega's approach to their 3D development of the Virtua Racing arcade technology, with them going straight to NASA (if I recall correctly), they just had a better edge on what it takes to make top-notch 3D hardware;

It was GE Aerospace (which later became part of Lockheed Martin - which is why Sega's later 3D boards were done with Lockheed Martin). Note that Atari Games had their own 3D boards working several years earlier with the TMS340-based Hard Drivin' boards. (Namco followed shortly after with the far more powerful - and expensive - System 21, still a couple of years ahead of Sega.)

 

In spite of GE being a very advanced tech company, the results under GEA/L-M didn't really end up fitting the established 3D standards (seen in computer games/applications, SGI workstations, or the subsequent market-wide standards set in the mid/late 90s). One glaring difference was the use of quads rather than triangles (triangles had been the common de facto standard for polygonal 3D on computers up to that point, and certainly became the definitive standard after that as well - and the 3DO, Saturn, and Nvidia NV1 - and the canceled NV2 - chipsets also used quads).

There was also the fatal flaw of high cost, which would plague every one of Sega's 3D arcade boards until the Dreamcast-derived Naomi (an issue that heavily exacerbated Sega's declining arcade business).

The Model 2 also used quads exclusively, and oddly lacked support for smooth shading (unlike Namco's System 22).

 

On top of that, Sega was designing an arcade board, something totally exclusive to their in-house software teams. Thus they had no real need to cater to market standards, as 3rd parties (generally) wouldn't even be using it.

And in that respect, Flare had a much better perspective for 3D game/hardware design . . . as did pretty much any significant R&D team that worked on the level of mass market computer/console platforms of the time.

 

Flare had to look at things from the perspective of 1989-1991 (when the Jaguar's core logic designs were being laid down - mostly in 1990), so a ton of guesswork and foresight were necessary on their part. They had 2D games to consider, as well as the limited polygonal arcade and computer games of the time and pseudo-3D (though the height-map engines - Doom/voxel games - had yet to really emerge in the mass market; Wolf3D and Comanche really started that).

They also saw smooth shading as an untapped potential boon for 3D graphics, though they included limited texture mapping support as well (mainly intended for rotated/scaled/warped effects similar to what the Sega CD could do).

The Jaguar was also designed with a considerable amount of flexibility, leaving some fairly powerful general-purpose resources available for unpredictable market requirements. (They also did all of that within the constraints of a limited R&D budget and an emphasis on low manufacturing costs - part of which was the extremely ambitious use of .5 micron silicon.)

 

Comparing the Jaguar in that context (foresight required, limited funds, very small design team, very tight and efficient low-cost design, etc.), the PlayStation doesn't look all that impressive, let alone the Saturn (both were designed later, with more market information to build on and far greater R&D resources). I'd add the 32X to that, but it's kind of impressive in its own right, since it went from a scrawl on a napkin to mass market in about 8 months (less than 6 months for the preproduction development systems).

 

 

Another company I look at, as far as the use of 8-bit color goes, is SNK and how they used dithering in most of their games after 1994.

The Neo Geo used 4-bit graphics (15 colors per sprite) like most other 2D arcade boards in the late 80s and early 90s. It has 256 palettes of 15 colors each.

 

The Jag was supposed to compete with the 3DO as far as technological and graphical feats go, but I think Atari should have continued to support the Lynx until they had a finished product with the Atari Jag 64.
It ended up competing in the 3DO's market (sort of), but the design concept was for a SNES/Genesis killer with awesome 2D and impressive 3D for the time.

 

And this is going off topic (more back to the old 1993 Jag topic), but:

As for the Lynx, or Atari releasing the Jag later/more polished: Atari Corp was pretty much screwed in 1993 (and progressively less salvageable from 1989 onward). There are some decisions that probably could have led to Atari doing better than they did (maybe even continuing to exist in a niche role to this day), but there wasn't much they could do at that point.

Releasing the Jaguar any later definitely wouldn't have helped in that context . . . what they needed was more money in 1993 to afford a proper launch (more/better software, more/better advertising, better software tools, a full launch in the US and Europe, etc.), among other things they simply didn't have at the time. (The test market was a desperate attempt at enticing investors - which did pay off in the short run.) Aiming at a somewhat different market position and going with CD-ROM rather than carts may also have been a very significant decision (especially in terms of competitive software prices and 3rd-party publisher interest, with the far lower risk of CD-ROM publishing).

 

With Atari's state at the time, you could even argue that they'd have been better off canceling the Jaguar and trying to drum up interest in the Lynx (software, investors, consumers, etc). In hindsight it's hard to tell which was the better option: the Jaguar ended up losing them money, but it also gave them publicity, a stock price boost, and investor interest in the short run (which arguably facilitated winning some outstanding lawsuits in 1994 in ways that the Lynx may not have been able to).

 

Overall, Atari was pretty close to finished before they got started with the Jaguar (on the market). By 1993 they'd managed to run the computer business into the ground, and they'd basically had no presence on the console market since 1990 (when the 7800 and 2600 more or less died), with the Lynx not making a big splash either (though it did somewhat better in parts of Europe). There may have been other factors, but the fact that Jack Tramiel retired in late 1988 and Michael Katz left in early 1989 matches up very well with the beginning of a deep downward spiral for Atari Corp.

Atari Corp had gone from a new company (with parts of Atari Inc grafted on), steeped in debt and struggling to make things work in 1984/85, to a Fortune 500 company in 1988 with the second-highest-selling game console in the US (albeit a distant 2nd) and the most popular 16-bit home computer in Europe. Sega even tried to license the Mega Drive to them for US distribution in 1988, due to the significantly higher market share Atari had over Sega. But it all began to fall apart after that, with a marked decline in management performance following Jack's and Katz's departures: no 4th-gen game console, shaky management of the ST without a good/competitive evolution of the line, and shaky support/management of the Lynx (and handhelds were/are a difficult market to push into in the West, and Atari had no chance in Japan unless they could get a very strong licensee to distribute it).

 

 

 

Despite that, any use of optimization with 256 colors is better than no optimization at all; plus, I don't think smooth shading was used for anything more than your typical Gouraud shading back then, which was the norm for its day. Dithering was there back then and used a lot in the old DOS games, both in 2D and 3D, which was something the 16-bits could do even with a coprocessor on a cartridge.

I'm not talking about simple checkerboard dithering for flat shading, but far more complex realtime dithered interpolation like X-Wing and Tie Fighter used (as did the PlayStation GPU and many PC accelerator cards - I think some PS2 games also used it).

 

Actually, X-Wing/Tie Fighter seemed to use more intensive dithering than the simple pattern-block/Bayer dithering used in the PS1 and most PC video cards of the late 90s. It looks more like Floyd-Steinberg dithering (a rough sketch of the idea follows the links below).

http://media.giantbo..._wing_super.png

http://images.wikia....EFighterDOS.png

http://www.btscene.e...Collection.html
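For reference, a minimal version of the error-diffusion idea (quantizing an 8-bit shading channel to 16 bands here is an arbitrary choice for illustration):

```c
#include <stdint.h>

static uint8_t clamp8(int v)
{
    return (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
}

/* Minimal Floyd-Steinberg error diffusion over a grayscale shading
   buffer, quantizing 0..255 light levels down to 16 bands (levels
   spaced 17 apart, so 15*17 = 255 stays pure white). Unlike a fixed
   Bayer pattern, each pixel's quantization error is pushed onto its
   unvisited neighbors (7/16 right, 3/16 down-left, 5/16 down,
   1/16 down-right), which gives the less regular, more "organic"
   look described above. */
static void fs_dither(uint8_t *img, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int old = img[y * w + x];
            int q   = ((old + 8) / 17) * 17;  /* nearest of 16 bands */
            int err = old - q;
            img[y * w + x] = (uint8_t)q;
            if (x + 1 < w)
                img[y * w + x + 1] = clamp8(img[y * w + x + 1] + err * 7 / 16);
            if (y + 1 < h) {
                if (x > 0)
                    img[(y + 1) * w + x - 1] = clamp8(img[(y + 1) * w + x - 1] + err * 3 / 16);
                img[(y + 1) * w + x] = clamp8(img[(y + 1) * w + x] + err * 5 / 16);
                if (x + 1 < w)
                    img[(y + 1) * w + x + 1] = clamp8(img[(y + 1) * w + x + 1] + err * 1 / 16);
            }
        }
    }
}
```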

Edited by kool kitty89
