How was the object processor used?


69 replies to this topic

#1 jregel OFFLINE  

jregel

    Combat Commando

  • 2 posts

Posted Sat Jun 18, 2011 5:05 AM

I've read in the FAQ and technical docs that the object processor can be used to act as a sprite engine, tilemap engine, bitmap display etc.

But how was it used in reality? I've read that Tempest 2000 is basically using the object processor to read a frame buffer for the main display and overlay the score as a sprite.

Did most games work this way (using the Blitter to draw to framebuffers) or was the object processor used in clever ways?

Thanks

JR

#2 philipj OFFLINE  

philipj

    Moonsweeper

  • 477 posts
  • Location:Birmingham, Alabama

Posted Sat Jun 18, 2011 7:03 AM

My general understanding of the object processor is that a display list is created; the object processor reads this display list and converts it into graphics by building two scanlines. When one scanline is complete, it shoots that scanline to the screen while simultaneously building the second scanline to be displayed. It's a looping process until there's no more list to display. That's just my understanding of the "Object Processor." As for how Tempest works, you'll have to ask someone else about the technicalities of T2K.
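The double-buffered scanline loop described above can be sketched in plain C. This is a toy simulation, not Jaguar code: `build_line` is a hypothetical stand-in for the OP reading object data, and the two buffers model the "build one while shooting the other" ping-pong.

```c
#include <string.h>

#define LINE_WIDTH 320
#define NUM_LINES  4   /* tiny example "screen" */

/* Two line buffers: while one is shown, the other is being built. */
static unsigned char line_buf[2][LINE_WIDTH];

/* Hypothetical "build one scanline from the object list" step. */
static void build_line(unsigned char *dst, int y)
{
    memset(dst, (unsigned char)y, LINE_WIDTH);  /* stand-in for real object data */
}

/* Walk the list: emit line y from one buffer while preparing line y+1
 * in the other, then swap - looping until the list is exhausted. */
int run_display_loop(unsigned char *screen /* NUM_LINES * LINE_WIDTH bytes */)
{
    int front = 0;
    build_line(line_buf[front], 0);          /* prime the first line */
    for (int y = 0; y < NUM_LINES; y++) {
        memcpy(screen + y * LINE_WIDTH, line_buf[front], LINE_WIDTH); /* "shoot" it out */
        if (y + 1 < NUM_LINES)
            build_line(line_buf[front ^ 1], y + 1);  /* conceptually in parallel */
        front ^= 1;                                  /* swap buffers */
    }
    return NUM_LINES;
}
```

On real hardware the "shoot" and "build" halves genuinely overlap in time; the loop above only models the ordering.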

#3 kskunk OFFLINE  

kskunk

    Moonsweeper

  • 455 posts
  • Location:Atari Mecca Sunnyvale, CA

Posted Sat Jun 18, 2011 11:19 AM

Did most games work this way (using the Blitter to draw to framebuffers) or was the object processor used in clever ways?

It's probably something like half and half. The blitter is easier to use and capable of more sophisticated graphics, but slower. The object processor is incredibly fast, but somewhat limited in effects and requires more advanced programming, especially in games with a lot of sprites.

Tempest 2K uses the blitter for almost everything. Effects like the smooth shading and 'exploding pixel zoom' can't be done by the object processor. Another hint that it uses the blitter is the framerate. It sometimes dips as low as 15FPS.

Defender 2K uses the object processor for almost everything. This is why the frame rate is a solid 60FPS, even in very frenzied scenes.

Jeff Minter later stated that he was prouder of his work in D2K. While T2K used the machine in straightforward ways (via the 68K and blitter), D2K really squeezed the performance out with some really tricky programming (via the GPU and object processor).

Some other examples:

Val D'Isere Skiing uses only the object processor to achieve such high framerates.
Doom uses only the blitter to achieve realistic shaded 3D effects.
AvP uses a combination of blitter for the 3D parts of the screen, and the object processor for the 2D overlays/maps.

Maybe this gives you some hints to work out which games use which system.

- KS

#4 philipj OFFLINE  

philipj

    Moonsweeper

  • 477 posts
  • Location:Birmingham, Alabama

Posted Sat Mar 17, 2018 12:31 AM

For kicks I googled the phrase "Object Processor" and this old topic popped up in the search engine... A great throw-back topic; very simple, but very direct to the point. I think any real gains to be made in making really fast, high-frame-rate stuff on the Jag will come from making good use of the "Object Processor." The Blitter is phenomenal in and of itself, but it doesn't seem to have the kind of speed you can get from the Object Processor unless you use the Blitter minimally, for shading or other tasks, like good old T2K.

 

Very nostalgic.


Edited by philipj, Sat Mar 17, 2018 12:32 AM.


#5 VladR OFFLINE  

VladR

    Stargunner

  • 1,424 posts
  • Location:Montana

Posted Sat Mar 17, 2018 11:46 AM

I think any real gains to be made in making really fast, high frame rate stuff on the Jag will come from making good use of the "Object Processor."

That's true only for 2D games, which didn't really matter to Atari much at the time of the Jag's short commercial lifespan.

For 3D, the OP is actually worse than the old, simple ANTIC in the Atari 800, as you have to keep refreshing the phrases of the framebuffer's OP data (which the OP shamelessly destroys each frame), regardless of the actual framerate, so it eats cycles during vblank.

 

The Blitter is phenomenal in and of itself, but it doesn't seem to have the kind of speeds you could get from the Object Processor unless you use the blitter minimally for shading or other tasks like good old T2K...

I find that Blitter is extremely overrated in terms of its performance. When I did some benchmarks for bitmaps scaling through Blitter (both when bitmap was in the tiny cache, and in main RAM), the pixel throughput was truly laughable. Even using the fast cache and tiny little bitmaps didn't help much (33%, if I recall correctly, compared to main RAM). And that was just 8-bit bitmaps, forget the 16-bit ones...

 

Now, based on my benchmarks, you could do something like Afterburner running at 20-30 fps, just via scaling on Blitter. So, it's - arguably - fast enough.

 

 

I will, however, grant Blitter one important killer feature: parallel processing.

 

 

If you, as a coder, are willing to complicate the engine design, you can keep the GPU processing other stages of the pipeline while the Blitter - nicely in parallel - shades the current scanline. I have a compile-time flag that enables/disables waiting for the Blitter, and the performance differences are quite staggering.

Then again, to be fair, so are the debugging/testing/development complications :lol:
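The cost of that sync point can be illustrated with a toy cycle model (the function name and the per-line cycle counts below are made up for illustration; the real numbers depend entirely on the workload):

```c
/* Toy model of VladR's compile-time flag: with wait_for_blitter set, the
 * GPU idles while the Blitter shades each line (costs add up); without it,
 * GPU setup of the next line overlaps the blit (only the longer one counts). */
int frame_cycles(int lines, int gpu_cycles, int blit_cycles, int wait_for_blitter)
{
    int total = 0;
    for (int y = 0; y < lines; y++) {
        if (wait_for_blitter)
            total += gpu_cycles + blit_cycles;   /* serial: GPU stalls on the blit */
        else
            total += gpu_cycles > blit_cycles    /* overlapped: pay only the max */
                   ? gpu_cycles : blit_cycles;
    }
    return total;
}
```

With 200 lines at, say, 100 GPU cycles and 80 Blitter cycles each, the serial path costs 36,000 cycles against 20,000 overlapped - nearly 2x, which is why the flag makes such a staggering difference.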



#6 philipj OFFLINE  

philipj

    Moonsweeper

  • 477 posts
  • Location:Birmingham, Alabama

Posted Sat Mar 17, 2018 11:53 PM

I find that Blitter is extremely overrated in terms of its performance. When I did some benchmarks for bitmaps scaling through Blitter (both when bitmap was in the tiny cache, and in main RAM), the pixel throughput was truly laughable. Even using the fast cache and tiny little bitmaps didn't help much (33%, if I recall correctly, compared to main RAM). And that was just 8-bit bitmaps, forget the 16-bit ones...

 

 

You may have to use a very low color depth of 4 bits or even 3 bits just to get any significant speed, especially if you're talking about a 3D texture-mapping situation... I think the PS1 used 4-bit dithered images with its hardware to pull off considerable speeds. The Jag seems to linger between being a really fast sprite machine and the 3D powerhouse it should've been, so that's something to consider too. I think it was Tempest 2000 that gave the Blitter its mythical status, thanks to Jeff Minter.

 

Considering how Doom works, the GPU seems to be treated like a VGA card rather than a 3D chip, where the main processor does all of the real work and the VGA displays pixels very quickly using tricks like Mode X and what-have-you. The only difference is that with the Jaguar the displayer is smarter, thanks to the Jag RISC with a Blitter that can do better than VGA Mode X, the Blitter being programmable and all. If you think in those terms, I think a proof of concept outside of the Jaguar would be a more hopeful approach, just as Doom was originally a PC game, proven to work on an old 386 machine, that was ported to the Jag.

 

That's true only for 2D games, which didn't really matter to Atari much at the time of the jag's short commercial lifespan.

For 3D, the OP is actually worse than the old simple Antic in Atari 800, as you have to keep refreshing the phrase of the framebuffer's OP data (which the OP shamelessly destroys each frame), regardless of the actual framerate, so it eats cycles during vblank.

 

Certainly the Jag can do better than a Neo Geo or even the Sega CD hardware, which can do zooming, scaling, and rotating... I think games like Supercross and Atari Karts are phenomenal on the Jag with their pseudo-3D effects. That's the Jag strength I hope to one day take advantage of in making a 3D engine.

 

If you, as a coder, are willing to complicate the engine design, you can keep GPU processing other stages of the pipeline, while Blitter - nicely in parallel - shades the current scanline. I have a compile-time flag that enables/disables waiting for Blitter and the performance differences are quite staggering.

Then again, to be fair, so are the debugging/testing/development complications :lol:

 

 

 

No surprise there... The Jag needs a better SDK. Late in the Sega Saturn's shelf life, Sega released a better SDK for the Saturn for one last coup de grâce. Keep fighting the good fight. lol



#7 VladR OFFLINE  

VladR

    Stargunner

  • 1,424 posts
  • Location:Montana

Posted Sun Mar 18, 2018 2:37 AM

 You may have to use very low color depth of 4bits or even 3bits just to get any significant speeds especially if you talking about a 3D texture mapping situation... I think the PS1 used 4bits dithered images using its hardware to pull off considerable speeds.

When I was working on the road texturing in high-res, the 4-bit scanline blitting proved to be really fast. I didn't get to fix all the glitches, so all my vids were taken in 8-bit colorspace - but it does make a huge difference, especially in resolutions above 768x200. Even at 1536x200, the Jag has the bandwidth to do road texturing at over 30 fps, provided you use a 4-bit texture for the road (which, frankly, is enough - e.g. 16 shades of grey).

 

When you, however, lower the bit depth further - to 2-bit - the difference is negligible, even at 1536x200. Below 768x200 it's basically unmeasurable. So it does not make sense to make 4-color games from a performance standpoint. Though, I imagine, at the highest resolution you could do a lot of scaling. Haven't tested that at 2-bit depth.
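The back-of-envelope arithmetic behind those observations is simple: the bytes the Blitter must move per frame scale linearly with width, height and bit depth, so halving the depth only saves a meaningful absolute amount at the widest resolutions. A small helper (hypothetical, just for the sums):

```c
/* Bytes of framebuffer data for a region of the given size and bit depth. */
long region_bytes(int width, int height, int bits_per_pixel)
{
    return (long)width * height * bits_per_pixel / 8;
}
```

At 1536x200, dropping from 4-bit to 2-bit saves 75 KB per frame (153,600 vs 76,800 bytes); at 768x200 the same drop only saves about 38 KB, which is why the difference becomes hard to measure.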

 

The Jag seems to linger between being a really fast sprite machine more than the 3D power house it should've been so that's something to consider also.

The Jag can clearly do super high resolutions and flatshading at 30-60 fps. Unfortunately, that's not where the market went, so this era was completely bypassed. If you look at many Sega 32X games, they could have been phenomenal on the Jag, in high res.

Recently I was watching some vids of one Japanese arcade board (the name escapes me at the moment, but it had something like 4 RISC chips and 3-4 additional processors) that used LaserDisc, and all its games were flatshaded. They look absolutely stunning in high res, even by today's standards - because there are no pixelated, grainy textures; everything is nice, sharp and clean. Honestly, I wouldn't even go for Gouraud (though that's more passable than simple texturing) - but that's personal preference...

 

 

  I think it was the Tempest 2000 that gave the Blitter its mythical status thanks to Jeff Minter.

Which is pretty weird, considering the brutal framedrops in Tempest. But I guess it's the same thing as people claiming that the Jag's Wolfenstein does 60 fps, while if you play it for 5 minutes you can clearly notice framedrops to 15-20 fps.

If it were a CD game, there could be a hypothetical scenario where it's streaming from CD. But it's a bloody cart game!

 

YouTube is actually pretty good at averaging out those framedrops, so when I finally popped the Wolf cart into my Jag and started playing it, it was WTF every few minutes, as my framerate expectations were vaaaastly different from watching it on YT :)

 

I literally stopped playing the Jag's Wolf after 2-3 levels, as I simply could not reconcile the totally falsified public image of the game with the actual stuttering reality. I even tried again 2 months ago, this time with proper expectations. Nope - as much as I love Wolfenstein, it didn't help, and I quit in the third level :lol:

 

Playing it live means no youtube interpolation algorithm. Just good, old-fashioned, real-life, unmasked, honest-to-god framedrops :lol:

 

Certainly the Jag can do better than a Neo Geo or even a Sega CD hardware that can do zooming, scaling, and rotating... I think games like "Supercross and Atari Karts" are phenomenal on the Jag with their pseudo-3D effects. That's where the Jag strength is that I hope to one day take advantage of in making a 3D engine.

Not familiar with the NeoGeo's specs, so I can't comment on that one. I wouldn't say Atari Karts is a good example of the Jag's strengths - its framerate looks slightly better than Supercross, but it's far from something I would voluntarily endure for more than 10 minutes.

 

Then again, if the said person perceived jag's wolf as 60-fps-no-framedrop-kind-of game, then I guess Atari Karts could work for them :lol:

 

 No surprise there... The Jag needs a better SDK. During the last shelf life on the Sega Saturn, Sega released a better SDK for the Saturn for one last coup-de-grace.

Which is pretty ironic, considering:

- Saturn had an absolutely amazing coding environment - e.g. a C-to-RISC compiler - whole games have been written straight in C, making the process 1-2 orders of magnitude faster than on the Jag

- Saturn does clipped HW texturing - all you have to do, as a coder, is 3D transform vertices and give the SDK a list of polygons. In C, that takes about a day or two of work. If it's your first time doing 3D :lol:

- When I was recently browsing Saturn's SDK, it really felt super high quality - like Microsoft's SDKs, with all the docs and tools

 

I don't know what happened to the Saturn, and while I haven't seen the PS1's SDK, I find it sadder (than, say, the Jag) that the Saturn didn't stand a chance against the PS1...



#8 LinkoVitch ONLINE  

LinkoVitch

    River Patroller

  • 2,504 posts
  • Location:Manchester UK

Posted Sun Mar 18, 2018 8:38 AM

The Object Processor is a cunning way to avoid needing to keep updating a frame buffer to display graphics on the screen.  You cannot display anything on a Jag (beyond changing the screen colour) without using the Object Processor.

 

Put simply, the Object Processor has a list of areas of Jaguar memory that contain bitmap images - be they sprites or the output of a 3D render - and displays them on the screen as per the list.  You could think of the sprites as pictures cut out of a magazine, placed in order and position on the page of a scrapbook.  This is fairly analogous to how the object processor does it.  If you put a picture of a motorcycle at 0,0 and then a picture of a lawnmower at 0,0, what you get is a motorcycle with a lawnmower overlaid on top of it :)  (There is extra magic in the OP allowing you to handle things like transparency of pixels if you wish, so not everything has a solid background colour unless you want it to.)

It can also do some rudimentary scaling of sprites and some other clever bits and pieces to allow you to make the lists more efficient and dynamic, but for the majority of games, it is responsible for piecing together what you see on the screen from the various image components in the Jag's RAM at the time.
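The scrapbook idea can be shown with a few lines of C. This is a conceptual sketch, not the OP's actual object format: objects painted later in the list land on top, and a designated transparent value lets the layer underneath show through (coordinates assumed non-negative).

```c
#define W 8
#define H 8
#define TRANSPARENT 0   /* pixel value treated as "cut out" */

/* Paint one bitmap object onto the screen at (x,y), skipping transparent
 * pixels - the scrapbook: later objects overlay earlier ones. */
void draw_object(unsigned char screen[H][W],
                 const unsigned char *pix, int ow, int oh, int x, int y)
{
    for (int j = 0; j < oh; j++)
        for (int i = 0; i < ow; i++) {
            unsigned char p = pix[j * ow + i];
            if (p != TRANSPARENT && y + j < H && x + i < W)
                screen[y + j][x + i] = p;
        }
}
```

Drawing a solid 2x2 "motorcycle" and then a 2x2 "lawnmower" with transparent holes at the same spot leaves the motorcycle visible through the holes - exactly the overlay behaviour described above.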

 

 

That's true only for 2D games, which didn't really matter to Atari much at the time of the jag's short commercial lifespan.

For 3D, the OP is actually worse than the old simple Antic in Atari 800, as you have to keep refreshing the phrase of the framebuffer's OP data (which the OP shamelessly destroys each frame), regardless of the actual framerate, so it eats cycles during vblank.

 

Not quite - it doesn't destroy the list at all.  It updates the list, which isn't always useful, but can be used as a simple and very efficient way to do animation.

 

What do you mean by "keep refreshing the phrase"? That makes no sense at all.

The simplest way to keep displaying the same object list each frame is to generate it once and then just copy it back over the list after the OP has processed it.  A simple blitter copy will do it lightning fast, but you can even do it using the 68K depending on the size of the list.
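That "generate once, copy back each frame" approach boils down to keeping a pristine master copy and blasting it over the live list after the OP has chewed through it. A minimal sketch (on hardware the `memcpy` would be a Blitter or 68K copy; `phrase_t` here just reflects that OP objects are built from 64-bit phrases):

```c
#include <string.h>

typedef unsigned long long phrase_t;   /* OP objects are built from 64-bit phrases */

/* The OP updates fields in each object as it walks the list, so restore the
 * live list from an untouched master copy once per frame. */
void restore_list(phrase_t *live, const phrase_t *master, int nphrases)
{
    memcpy(live, master, nphrases * sizeof(phrase_t));
}
```

Called once per vblank, this keeps the displayed list identical every frame at the cost of one straight copy.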

 

Of course if your game is running at 50/60 FPS and has a suitably dynamic display of sprites you may as well just generate the next list ready for the next frame and point the OP at the new list when it is done with the 1st.

 

Having the OP not modify the list would be more convenient programmatically, but it would greatly increase the complexity and limitations of the OP's silicon, or have a greater impact on the already limited memory bus.



#9 philipj OFFLINE  

philipj

    Moonsweeper

  • 477 posts
  • Location:Birmingham, Alabama

Posted Mon Mar 19, 2018 12:41 AM

Jag can clearly do super high resolutions and flatshading, at 30-60 fps. Unfortunately, that's not where the market went, so this era was completely bypassed. If you look at many Sega 32x games, they could have been phenomenal on jag, in high res.

Recently I was watching some vids on one Japanese arcade (name escapes me at the moment, but it had like 4 RISC chips and 3-4 additional other processors) that was using laserdisc and all games were flatshaded. They look absolutely stunning in high res. Even by today's standards - because there's no pixelated grainy textures, everything is nice, sharp and clean. Honestly, I wouldn't even go for Gourard (though, that's more passable as simple texturing) - but that's personal preference....

That reminds me of the old "Namco System 21" arcades... Not to throw this topic off the subject, but I've looked at so much hardware in comparison to the Jaguar, including the old graphics workstations of the 80s. Archive.org has some stuff concerning old workstations and some 3D arcade hardware if you're lucky. If Gouraud shading can be pulled off past 30 fps, I'll take it; I have no problem with shading except for the slowdowns. As for the grainy textures, I'm wondering if the OP can blur out the graininess using anti-aliasing...? The OP can do effects like fog and other scanline-based effects. The N64 blurred textures by default, which was one of the highlights of that system, but I think the Jag's GPU is capable of doing something similar, though very differently, due to there being no video RAM; some things would have to be worked out with the other processors, especially the DSP having the highest priority and access to main RAM at full speed.

 

I don't know what happened to Saturn, and while I haven't seen PS1's SDK, I find it more sad (than, say, jag) that Saturn didn't stand a chance against PS1...

 

 

You can thank the Silicon Graphics technology used in the PS1 and N64 for that... Sega learned from that experience when they released "Scud Racer" in the arcades around '96 using the "Lockheed Martin Real3D/PRO-1000" technology. That kind of hardware was used in the auto industry, graphics workstations and military applications before it made its way into the gaming world, so the shading techniques were tried and true, versus the Jaguar hardware, which was an in-house Atari exclusive in the making. Not to take away from the Jag's ability - I think all of the tricks the other hardware does can be done on the Jag too.

 

 

Object processor is a cunning way to not need to keep updating a frame buffer to display graphics on the screen.  You cannot display anything on a jag (beyond changing the screen colour) without using the Object processor.

@LinkoVitch

Right... The OP feeds lines into the video chip to be displayed while preparing another line for display. I'm hoping to one day capitalize on its ability to change screen color to pull off some other effects in 3D beyond Gouraud shading, which I know is what the Blitter is responsible for... I would like to see the OP's ability to change screen colors on the fly more cleverly used to pull off effects like smoke and fog, as well as some other stuff to make 3D objects look better.

What do you mean "keep refreshing the phrase" ? that makes no sense at all??

The simplest way to keep displaying the same object list each frame is to generate it once and then just copy it back over the list after the OP has processed it.  A simple blitter copy will do it lightning fast, but you can even do it using the 68K depending on the size of the list.

 

 

Agreed... That was my first thought when I read that... The Blitter is one big copy machine that can be used for more than just graphics; I would like to use the Blitter to do math for graphical purposes if possible.



#10 philipj OFFLINE  

philipj

    Moonsweeper

  • 477 posts
  • Location:Birmingham, Alabama

Posted Mon Mar 19, 2018 12:54 AM

Not familiar with NeoGeo's specs, so can't comment on that one. I wouldn't say Atari Karts is a good example of jag's strengths - its framerate looks slightly better than Supercross, but is far from something I would voluntarily endure for more than 10 minutes.

 

Then again, if the said person perceived jag's wolf as 60-fps-no-framedrop-kind-of game, then I guess Atari Karts could work for them :lol:

 

Ok... Here's a post of two racing games: one is "Riding Hero" for the Neo Geo and the other is "Super Burnout" for the Atari Jaguar. Both do a similar style of graphics - zooming sprites - using each system's sprite-zooming features.  For the Atari Jag it would be the "Object Processor" doing all of the work, and for the SNK, although the system has less RAM than the Jag, the Neo Geo pulls off some pretty good stuff on its own merit.

 

Riding Hero for the Neo Geo

 

 

Super Burnout for the Atari Jaguar



#11 VladR OFFLINE  

VladR

    Stargunner

  • 1,424 posts
  • Location:Montana

Posted Mon Mar 19, 2018 1:26 AM

As for the grainy texture, I'm wondering if the OP can blur out the graininess using anti-aliasing...? The OP can do effects like fog and other effects that involve scan-line based effects. The N64 blurred textures by default, which was one of the highlights of that system, but I think the Jags GPU is capable of doing similar but very differently due to there being no video ram; some things would have to be worked out with the other processors especially the DSP having the highest priority and access to main ram at full speed.

My Road-rash codepath - especially one of the last builds - has a compile-time flag that triggers a separate codepath that actually does bilinear filtering on the scanline while the Blitter is busy blitting the previous scanline. A very good use of GPU "down-time" if you ask me :lol:

So, effectively, you are racing the beam - or, more specifically, the Blitter beam :) - like on the old Atari, except this time, instead of adding more colors, you are increasing image quality in a different way :)

It's not exactly straightforward, as it's all happening in 8-bit colorspace, but it's possible and viable nonetheless.

 

 You can thank the Silicon Graphics technology used in the PS1 and N64 for that... Sega learned from that experience when they released "Scud Racer" in the arcades around 96 using the "Lockheed Martin Real3D/PRO-1000" technology. Those kind of hardware's were used in the auto industry, graphic workstations and military application before it made its way into the gaming world so the shading techniques were tried and true versus the Jaguar hardware that was an in house Atari exclusive hardware in the making; not to take away from the Jags ability, I think all of those tricks the other hardware do can be done on the Jag also..

We have 3 general-purpose processors in the Jag - the 68000, GPU and DSP - so obviously any kind of effect can be coded on any of them (or a combination thereof). The beauty of the RISCs, however, is that they're so fast you can emulate things like a modern shader language. I did just a small bit of research on this - implemented a few vertex shader instructions - but I'm pretty sure I could create a compiler that takes vertex/pixel shader assembler from the PC and converts it to RISC code running on the Jag.

 

Imagine taking the huge library of all the shaders, and just recompiling them to jaguar :)

 

Granted, not all could be real-time, but plenty would, for sure. Especially the post-processing effects for things like menus or framebuffer blurring (when you hit pause) could be totally reused.

 

Oh, and water shaders - as that's what I was primarily researching, if it's possible. Yep, it is, if you combine it cleverly with OP :)

 

Matter of fact, that's what the DSP is a better candidate for than the GPU, as it has double the amount of internal cache. The 16-bit path to RAM is irrelevant, as blitting a few KB costs an entirely inconsequential amount of time compared to the brute-force computations the DSP does within its cache (a fact magnified once you opt for a lower framerate) :)

 

Of course, the best effect (and utilization) can be reached if you properly multithread it - e.g. keep one shader thread on the DSP and another on the GPU, with zero sync points - then they truly run 100% in parallel without waiting for each other. Certain multi-pass effects are doable this way, btw, with the 68000 coordinating and directing both threads during the sync point ;)

 

 I'm hoping to one day capitalize on it's ability to change screen color to pull off some other effects in 3D beyond gouraud shading, which I know that's what the blitter is responsible for... I would like to see the OP ability to change screen colors on the fly more cleverly used to pull off effects like smoke and fog as well as some other stuff to make 3D objects look better.

Fog is very easy to do when flatshading, so I don't see a reason to bring the OP into the picture (other than, perhaps, as a research curiosity). If you're thinking of using a separate OP object to do that, then that eats additional cycles in terms of creating/updating the OP list, so you'd have to benchmark whether it even makes sense in the first place.

 

I find it very hard to imagine the OP beating the GPU at fog, as I can do distance-based fog on the GPU with just a few instructions. Just updating the destroyed phrase on the additional OP object costs more cycles, let alone the OP doing additional per-pixel work, which, BTW, without branching objects will be consumed every scanline. It's veeery easy to destroy all the system's bandwidth just with the OP if you're not super careful.

 

So, how exactly do you imagine you can do fog on the OP faster than in the few cycles it takes on the GPU to compute the vertex color?
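For reference, distance-based fog in a flat-shaded pipeline really is only a handful of operations per vertex: a linear blend of the polygon colour toward the fog colour by distance. A hedged, integer-only sketch of one colour channel (function name and parameters are illustrative, not from any Jag codebase):

```c
/* Blend one 8-bit colour channel toward the fog colour as distance
 * approaches far_dist: colour + (fog - colour) * dist / far_dist. */
unsigned char fog_channel(unsigned char colour, unsigned char fog_colour,
                          int dist, int far_dist)
{
    if (dist >= far_dist) return fog_colour;   /* fully fogged */
    if (dist <= 0)        return colour;       /* no fog at the camera */
    return (unsigned char)(colour + ((fog_colour - colour) * dist) / far_dist);
}
```

One multiply, one divide (or a shift if `far_dist` is a power of two), an add - done per vertex, not per pixel, which is the point being made above.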

 

About the smoke - it's just a transparent bitmap - what exactly do you want to do with it ?

 

Agreed... That was my first thought when I read that... The Blitter is one big copy machine that can be used for more than just graphics; I would like to use the Blitter to do math for graphical purposes if possible.

It's just a nuisance, that's all. It doesn't matter whether you use the Blitter to restore the destroyed data in the OP list, do triple buffering on the OP list, or just recompute it. It's a waste regardless, which is why it's unfortunate that there's no way on the Jag to display the framebuffer without using the OP. All it would take is one stupid register with the address of the framebuffer, that's all. I refuse to believe that one such register would complicate the HW design, considering the plethora of HW registers already present.

 

It would simply be great if you could turn the Object Processor on and off, just like you can choose to do with the GPU, DSP, 68000 or Blitter.

But you do not have that option on the Jag; unfortunately, you are forced to use the useless abomination that the OP is (in terms of 3D).



#12 VladR OFFLINE  

VladR

    Stargunner

  • 1,424 posts
  • Location:Montana

Posted Mon Mar 19, 2018 1:46 AM

 

Ok... Here's a post of two racing games; one is "Riding Hero" for the Neo Geo and the other is "Super Burnout" for the Atari Jaguar. Both are doing similar style graphics of zooming sprites using both system sprite zooming features.  For the Atari Jag it would be the "Object Processor" doing all of the work and for the SNK, although the system has less ram than the Jag, the Neo Geo pulls off some pretty good stuff on its own merit.

 

Riding Hero for the Neo Geo

 

I don't have a benchmark on whether it's faster to do scaling via the OP than the Blitter (theoretically it should be, but you have to account for the OP object bandwidth, especially without branching objects).

 

But I can tell you for sure that at that resolution, those 3 scaled bitmaps (2 signs and 1 enemy bike, most of the time) can easily be pulled off on the Jag via the Blitter at 60 fps, even without polluting the 4 KB cache with them.

The OP would only make things easier for a coder, that's all - you'd just update the extra phrase with HSCALE/VSCALE, which is certainly easier than directing the Blitter, as it just happens automagically via the OP.

 

Of course, with only 3 different types of such scaled bitmaps on each track, you could alternatively just precompute all the mipmaps and pre-scale them at load time (we have 2 MB, after all), at which point there's zero performance overhead from scaling at run time. The OP, then, could just copy/paste them all over the screen (via separate objects, all pointing to the same image data) if you want - e.g. something like AfterBurner (where each scaled bitmap is reused a dozen times).
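The load-time pre-scaling idea can be sketched as a plain nearest-neighbour resampler run once per needed size (an assumed helper for illustration, not Jaguar-specific code):

```c
#include <stdlib.h>

/* Nearest-neighbour prescale, done once at load time: trade RAM for zero
 * run-time scaling cost, then just point objects at the right copy.
 * Caller frees the returned buffer. */
unsigned char *prescale(const unsigned char *src, int sw, int sh,
                        int dw, int dh)
{
    unsigned char *dst = malloc((size_t)dw * dh);
    if (!dst) return NULL;
    for (int y = 0; y < dh; y++)
        for (int x = 0; x < dw; x++)
            dst[y * dw + x] = src[(y * sh / dh) * sw + (x * sw / dw)];
    return dst;
}
```

Running this for each size in the mipmap chain at load time leaves only a table lookup per frame: pick the pre-scaled copy nearest the desired on-screen size.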

 

I would personally choose the flexibility of the Blitter, but I can see that a lazy coder could just create a composite of the bitmaps and let the system eat all the bandwidth it has :lol:

 

 

 

With the Blitter, you only get a framedrop if too much bandwidth is consumed, unlike the OP, where you get an ugly glitch. So, with the OP's way, it really is "my way or the highway" :lol:



#13 saboteur OFFLINE  

saboteur

    Chopper Commander

  • 109 posts

Posted Mon Mar 19, 2018 3:09 AM

Object processor is a cunning way to not need to keep updating a frame buffer to display graphics on the screen.  You cannot display anything on a jag (beyond changing the screen colour) without using the Object processor.

 

Simply the Object Processor has a list of areas of jaguar memory that contain bitmap images, be they sprites, or the output from a 3D render and displays them on the screen as per the list.  You could think of it as the sprites being cut out pictures from a magazine, being placed in order and position on the page of a scrapbook.  This is fairly analogous to how the object processor does it.  If you put a picture of a motorcycle at 0,0 then a picture of a lawn mower at 0,0 what you will get would be a motorcycle with a lawnmower overlayed on top of it :)  (There is extra magic in the OP allowing you to handle things like transparency of pixels if you wish, so not everything has a solid background colour unless you want it to).

 

 

Best explanation EVER. Should be pinned in the (RB+) programming section for idiots like me to refer to.



#14 sh3-rg OFFLINE  

sh3-rg

    River Patroller

  • 3,388 posts
  • doge + tie = dothemath
  • Location:BOLTON, England

Posted Mon Mar 19, 2018 3:38 AM

Best explanation EVER. should be pinned in the (RB+) programming section for idiots like me to refer to.


To be fair, that's essentially the same kind of analogy as mine when I said it was a collage... but the rb+ website/docs I made seem to be offline/gone and I can't check. I have it locally on an old laptop; I'm going to work on it again and put it somewhere it won't be lost and forgotten.

#15 LinkoVitch ONLINE  

LinkoVitch

    River Patroller

  • 2,504 posts
  • Location:Manchester UK

Posted Mon Mar 19, 2018 3:46 AM


@LinkoVitch

Right... The OP feeds lines into the video chip to be displayed while preparing another line for the video chip display to screen. I'm hoping to one day capitalize on it's ability to change screen color to pull off some other effects in 3D beyond gouraud shading, which I know that's what the blitter is responsible for... I would like to see the OP ability to change screen colors on the fly more cleverly used to pull off effects like smoke and fog as well as some other stuff to make 3D objects look better.

 

 

Agreed... That was my first thought when I read that... The Blitter is one big copy machine that can be used for more than just graphics; I would like to use the Blitter to do math for graphical purposes if possible.

 

The OP is fairly tightly coupled to the video DAC, IIRC; there's not really a "video chip" as such, just a few timers to sort out screen size and pixel size etc.  But yeah, the OP pulls the various scanlines of data from RAM and builds them into its scanline buffer, which you can edit directly if you wish.  There are GPU interrupt objects that you can embed in the OP list, but I believe these are buggy and can also be a significant hit to speed.  You have to remember that each scanline needs to be completed and ready to ship in a very short amount of time (240 scanlines in 0.01666 seconds, so about 0.000069 seconds per scanline :) - that's not a lot of time).
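That per-scanline figure falls straight out of the frame timing - a one-liner (hypothetical helper, integer nanoseconds) confirms it:

```c
/* Nanoseconds available per scanline: one frame period split across
 * the visible lines, e.g. 240 lines inside a 1/60 s frame. */
long ns_per_scanline(int lines, int fps)
{
    return 1000000000L / fps / lines;
}
```

For 240 lines at 60 Hz that comes out at 69,444 ns, i.e. the ~0.000069 seconds quoted above.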

The OP can do a very limited number of fun things with the image it is rendering. It does have a Read-Modify-Write mode, but that can be quite costly in terms of bandwidth compared to a simple read; it can be used for some cool effects though.  Actual processing of images is pretty limited: it cannot do any real alpha transparency stuff, so blending colours beyond RMW isn't possible.  It really is just a device to lay out a screen.

I was going to quote VladR and point out his many inaccuracies and uses of numberwang, but there were just so, so many of them I simply couldn't bring myself to do it.

1) Shaders : HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA
2) Compiled PC shaders : HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA *Wipes tear* HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

Sorry, but that's the best joke I have heard. As he's such a HUGE fan of numberwang, let's do a little...

 

Assuming a buffer of 320x200, that's 64,000 pixels. Shaders run on each pixel, usually on hardware with multiple cores to maximise MULTIPROCESSING (not mere threading), and each of those cores is designed to do maths. The Jag has none of these, and is many years older than the systems that do. But anyway, back to the numberwang: assuming a 320x200 screen, a 30FPS refresh, the GPU running at 25MHz and each instruction taking 1 tick to complete (it doesn't really, but you know, we're in numberwang land)... that gives us 1 second / 30 = 0.0333 (recurring) seconds per frame, so roughly 833,333 instructions available to complete a frame, which need to be spread across the 64,000 pixels: roughly 13 instructions per pixel.  That's ignoring ANY fetching of data from RAM and most definitely any writing of data to RAM.  As the system would need to run sequentially you could possibly spend a few ticks setting up registers pointing at the right places in the buffer to start, but there's still no time to fetch, process and write back anything of use.  Certainly not perform 3D vector maths, colouring, SIN calcs etc.

Use multiple processors, you say (probably)? Sure, so you might up that to 26 instructions per pixel, but you are hitting the same memory bus as everything else, and the DSP has only a 16-bit extension bus, not a full 32 bits.  The OP still needs time to read this beautifully crafted frame buffer.  You don't have time to use double buffering, so you are going to suffer tearing and distortion; you have no time or bus left to clear a new buffer, so your shader also needs to blank things out before it starts writing anything.  You can't really use the 68K because it's woefully underpowered for this kind of work, and it eats even more bus.

 

3) Simple Frame buffer pointer for OP

 

This is the most sensible thing you have written. It *might* have been a nice feature to be able to point the OP at a section of RAM, have it treat that as a background frame, and then have the OP list optionally decorate it; this would indeed not have been a huge burden on the silicon, and quite sensible (given Atari's obsession with 3D).

4) DSP 16bit being irrelevant

 

It takes twice as much bus time to get stuff in and out of the DSP, and this does matter.  Yes, it can do more in its cache (which also has to hold your code, remember, so you are already eating into your roughly 4096 instructions' worth of RAM).  You need to be able to hold all the data for your calculations in there, plus the output (several pixels' worth), to mitigate the 16-bit bus limit.  You also lose your matrix transformation optimisations.  I'm not saying you cannot use the DSP to assist with 3D (hell, you can use the 68K to do all your 3D if you want); I am merely disputing the claim that the 16-bit bus width is not an issue.


 



#16 LinkoVitch ONLINE  

LinkoVitch

    River Patroller

  • 2,504 posts
  • Location:Manchester UK

Posted Mon Mar 19, 2018 3:49 AM

To be fair, that's essentially the same kind of analogy as mine when I said it was a collage... but the rb+ website/docs I made seem to be offline/gone and I can't check them. I have it all locally on an old laptop; I'm going to work on it again and put it somewhere it won't be lost and forgotten.

 

Same as yours??? I'LL SEE YOU IN COURT!!!!!!!!11111oneoneone

 

:D

 

The scrapbook idea just popped to mind as it's exactly what it's like, isn't it :)  Albeit a super twitchy scrapbook that sometimes snaps closed on your fingers and refuses to open because you didn't arrange the magical doilies correctly under the plate of biscuits, which has to be in exactly the right position next to it for it to work :)



#17 CyranoJ ONLINE  

CyranoJ

    Quadrunner

  • 5,333 posts
  • RAPTOR in LOCAL
  • Location:Adelaide, SA

Posted Mon Mar 19, 2018 4:06 AM

Linko, I can't work any of that out without knowing how many on/off ramps, the distance to work, or how long I'm going to use my hand to type about having a bad hand.



#18 VladR OFFLINE  

VladR

    Stargunner

  • 1,424 posts
  • Location:Montana

Posted Mon Mar 19, 2018 5:32 AM

As always, a local group of technically completely clueless and as is becoming more obvious -mentally challenged jokers, is playing out their standup comedy performances.


Quick ! Run to mommy to ban me from this thread too, as obviously that's the only semi-coherent semblance of an attempt at an idea, you can dare to begin to hope to formulate :lol:

#19 sh3-rg OFFLINE  

sh3-rg

    River Patroller

  • 3,388 posts
  • doge + tie = dothemath
  • Location:BOLTON, England

Posted Mon Mar 19, 2018 6:03 AM

Linko, I can't work any of that out without knowing how many on/off ramps, the distance to work, or how long I'm going to use my hand to type about having a bad hand.


So we're talking multiple European-standard weekends here?

#20 philipj OFFLINE  

philipj

    Moonsweeper

  • 477 posts
  • Location:Birmingham, Alabama

Posted Mon Mar 19, 2018 3:53 PM

Of course, with only 3 different types of such scaled bitmaps on each track, you could -alternatively- just precompute all the mipmaps and pre-scale them at load-time (we have 2 MBs, after all), at which point there's zero performance overhead of scaling at run-time. OP, then, could just copy/paste them all over the screen (via separate objects, all pointing to same image data), if you want - e.g. something like AfterBurner (where each scaled bitmap is reused dozen times).

 

I would personally choose the flexibility of Blitter, but I can see that a lazy coder could just create a composite of the bitmaps and let system eat all the bandwidth it has :lol:

@VladR

I wouldn't consider composition stuff the mark of a lazy coder... I think my point is being missed a little here when I mention the word composition. First off, let me make an important note: my philosophy for Jag development is to "Think really big, but start small." That's really the driving force behind a lot of my ideas; I got the phrase from a solar power company, and it became somewhat of an epiphany that stuck with me for years.

When I mention the word composite, I'm thinking in terms of the Jag doing composition on a very small scale, in a way that's feasible. You probably can't do shaders in the sense that a modern PC or game console can, but you might be able to trick some of the simpler Jag mechanics into doing something similar, using low-depth color, low-poly objects or just simple bounding boxes in main RAM for the DSP or M68K to manage until the data actually reaches the GPU for fast rendering. Large data sets don't have to occupy physical memory; a representation of such data can be present in the machine before an actual rendering takes place.

We can't get the Jag to the PS4's level (it's just not built that way), but we can get on the Jaguar's level, where data and throughput can be very small and quickly manageable for real-time purposes. It all sounds good to me, but I know the reality of the Jag's complexity, so I try to keep an open mind these days. That's also why I say in other topics that there needs to be a radical rethinking of how to program the Jaguar: the system is what it is, and any 3D pipeline that came after it wasn't designed with the Jag in mind, being built for better system architectures.

 

It takes twice as much bus to get stuff in and out of the DSP, this does matter.  Yes it can do more in it's cache (which also has to hold your code remember, so you are already eating into your 4096 (roughly) instructions worth of RAM.  You need to be able to hold all the data you need for your calculations in there and the output (several pixels worth) to mitigate the 16bit bus limit.  You also lose your matrix transformation optimisations.  Not saying you cannot use the DSP to assist with 3D, hell you can use 68K if you want to do all your 3D, I am mearly disputing the 16bit bus width not being an issue.

 

 

@ LinkoVitch

The same answer I gave VladR also somewhat applies here, and I hope it gives a better understanding of a lot of my comments.

 

There's a couple of old 3D programs I used to work with in the late 90s: "Impulse Imagine 3D", which was in direct competition with the original "3D Studio", and "Vistapro", a landscape program that seems to use fractals to render terrain. I'm going to post a couple of things I did in Vistapro (and showboat a little :grin:). Both of these programs were out during the Jag's heyday and were both considered ray-tracer-based programs on the Amiga, though my versions were on the IBM PC. These programs would start out as low-poly bounding boxes, or in Vistapro's case as low-level fractals, before anything was ever rendered. If you think about the game Doom, it started as a compact binary tree that resided in RAM and didn't actually get rendered until the GPU picked it up via one of the Jag's processors, right? Well, the same principle can apply to a real-time 3D renderer; the key would be the consolidation of the 3D sets for fast, low-bit-depth recovery at the right time.

 

Now, the videos I made back in 2006 (actually for a grade in a couple of college classes I was taking at the time) started out as low-fractal height maps that look very similar to "Rescue on Fractalus" for the Atari 8-bit systems. Not to throw this off topic, but to get a point across: before I rendered the actual video you see, I would preview it in a height-map fractal view designed for real-time viewing on an IBM 386 or higher. Not that I'm saying we should use fractals exclusively; what I'm saying is that the same principle (a fast, small, low-level data set) can be handled by the other processors on the Jaguar before it actually reaches the GPU, Blitter and OP for real-time polygon rendering or some other fast rendering scheme... Something like THAT should be the design philosophy behind the technicalities.

 

Native Land Treasure: I rendered this in "VistaPro" for Windows, composed the music using "Acid" beat loops, and edited everything in Adobe Premiere back in 2005 or '06.

 

A high res "VistaPro" rendering I also did in 2006.

 

Sigh. Great memories... :music:


Edited by philipj, Mon Mar 19, 2018 4:00 PM.


#21 philipj OFFLINE  

philipj

    Moonsweeper

  • 477 posts
  • Location:Birmingham, Alabama

Posted Mon Mar 19, 2018 5:55 PM

OK, here's an example of a low-level view of a Vistapro animation sequence before it's rendered, on an Amiga... It runs much faster on an IBM machine. Just an illustration to make a point.

 

https://youtu.be/xd0L4TbHPm4?t=3m31s


Edited by philipj, Mon Mar 19, 2018 5:56 PM.


#22 LinkoVitch ONLINE  

LinkoVitch

    River Patroller

  • 2,504 posts
  • Location:Manchester UK

Posted Tue Mar 20, 2018 3:35 AM

@ LinkoVitch

The same answer I gave VladR also somewhat applies here, and I hope it gives a better understanding of a lot of my comments.

 

There's a couple of old 3D programs I used to work with in the late 90s: "Impulse Imagine 3D", which was in direct competition with the original "3D Studio", and "Vistapro", a landscape program that seems to use fractals to render terrain. I'm going to post a couple of things I did in Vistapro (and showboat a little :grin:). Both of these programs were out during the Jag's heyday and were both considered ray-tracer-based programs on the Amiga, though my versions were on the IBM PC. These programs would start out as low-poly bounding boxes, or in Vistapro's case as low-level fractals, before anything was ever rendered. If you think about the game Doom, it started as a compact binary tree that resided in RAM and didn't actually get rendered until the GPU picked it up via one of the Jag's processors, right? Well, the same principle can apply to a real-time 3D renderer; the key would be the consolidation of the 3D sets for fast, low-bit-depth recovery at the right time.

Sigh. Great memories... :music:

 

I am not sure which "other" processor you are thinking of here besides the GPU, OP and Blitter... the only two left are the 68K and the DSP.  If you are doing any heavy processing that is going to require memory access, it would probably be best to use the GPU, as it has a full 32-bit-wide feed in and out of main RAM; the DSP has a limited 16-bit one.  Also, if you have the DSP working away you now have no sound, and it can be useful to put things like reading the pads on the DSP too.

 

Using the DSP to offload "some" of the work from the GPU is quite common: leave the GPU to do the majority of the gruntwork whilst the DSP does a bit here and there. But there is only one bus, which only one CPU can access at any time.  The two RISC cores can run inside their caches without needing the main bus, but every time they need a bit more data in or out, that's a bus hit, or a wait to hit the bus if a higher-priority CPU has it at the time.

 

The Jag only has five processors, and of those only three are "programmable" (68K, DSP, GPU).  If you are passing data between the DSP and GPU, that also needs the bus, so you are going to prevent any main RAM access during that time too.

 

I remember the old Vista programs from back in the day :D  Indeed they were good times :D



#23 Shamus OFFLINE  

Shamus

    Dragonstomper

  • 659 posts
  • Moo, er, Roar!
  • Location:Ur-th

Posted Thu Mar 22, 2018 6:45 AM

Technically, the OP is programmable too. It has a very small instruction set, and is wonky as all get-out, but it's still a legit processor. :)

#24 LinkoVitch ONLINE  

LinkoVitch

    River Patroller

  • 2,504 posts
  • Location:Manchester UK

Posted Thu Mar 22, 2018 2:14 PM

Technically, the OP is programmable too. It has a very small instruction set, and is wonky as all get-out, but it's still a legit processor. :)

 

I'm not disputing that it's a processor and 'programmable', but you know what I mean :)



#25 philipj OFFLINE  

philipj

    Moonsweeper

  • 477 posts
  • Location:Birmingham, Alabama

Posted Sat Mar 24, 2018 1:45 AM

OK... Here's a good example of a low-poly engine running on an X68000 computer... It's very fast with no surfaces, just straight wireframes running decently fast on a 68K processor. For the Jag, I would do the low-poly work on the DSP, or the M68K, or both for speed, and let the GPU render a high-poly version to screen using a type of scanline renderer driven by the OP. Or, if I could just fake the living daylights out of 3D using 2.5D methods at the GPU level, based on the real low-poly 3D stuff, I'd settle for that; the Jaguar's strength is in its 2.5D rendering. I've always been a firm believer in that fact. That's my vision of a 3D engine for the Jaguar in a nutshell.

 


Edited by philipj, Sat Mar 24, 2018 1:45 AM.




