SainT

Upcoming Jaguar Game Drive Cartridge


Hell, if only so I don't have to swap cartridges in and out to play them. I bought the AtariMax cartridge for my 5200, but I still wish I had a ROM file for my Adventure II cartridge so I wouldn't have to swap cartridges.

 

I guess the digital age has spoiled me: I have thousands of games in my Steam library, and I can just click a button to download any game I want to play :P I'd definitely play my Jag more if I had the SD cart for it.

 

I trust that the CD Support will come eventually. Great minds and all that.


Having the CD capabilities will be pretty amazing for the many people out there who do not have access to a CD unit due to its rarity and price. Luckily mine works just fine right now, but they can be finicky and may break down with use. Keep up the good work; this is still a promising project in the works, and it'll be done when it's done. A lot of newbie Jag owners jumping into the scene will be able to benefit greatly from this cart once it is ready, and it's still good for the veterans who have been around the Jag for a long time as well.


Rather curious: waiting to ensure the GPU is not active (*G_CTRL & RISCGO == 0) before using the GPU sometimes results in a hang. That is, the GPU is allegedly still active, and the code polls forever waiting for it to stop.

 

I stop the GPU and verify it has stopped after every use of the GPU in the code I'm running, so this state should be impossible. If I ignore the check and just copy over and run the bit of code I want, it all seems to function.

 

Is this "normal" to see, or is an interrupt request perhaps causing RISCGO to become set again? I'm wondering if this is related to the other weird behaviour I'm seeing with the CD reads and dodgy RISC access: the GPU getting into some odd running / interrupt-acknowledge state while idle...

 

I'm a bit unique here in having an external interrupt source active, so any weird stuff could be happening...
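For reference, the naive stop-and-poll sequence being described would look something like this in Jaguar RISC assembly. This is only a hypothetical sketch, not code from the actual project: the register choices are arbitrary, and it assumes G_CTRL at $F02114 with the GO bit in bit 0, so check against the hardware docs.

	movei	#$f02114,r0	; G_CTRL
	load	(r0),r1
	bclr	#0,r1		; clear the GO bit to request a stop
	store	r1,(r0)
.chk:
	load	(r0),r1
	btst	#0,r1		; GO still set?
	jr	ne,.chk		; yes: keep polling (this is where it can hang)
	nop

If something (such as an interrupt source) can set GO again between the store and the poll, this loop never exits, which would match the behaviour described.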


I had fun with stopping the DSP; Shamus thankfully pointed out the error of my ways. Originally I was just stuffing 0 into D_CTRL, which in my mind should stop the DSP dead. More often than not it caused the DSP to lock up.

 

The solution I now have in place (thanks Shamus :D ) is a semaphore that signals to all interrupt handlers that a stop is requested. When each interrupt fires and sees the semaphore set, it disables itself via the D_FLAGS register, sets a flag to say it has stopped, and exits. Once ALL the interrupts have confirmed they have stopped, THEN the main loop stuffs 0 into D_CTRL and executes a bunch of nops, just to be sure the DSP doesn't get carried away and run some extra code for lols :)

 

I suspect that what was happening in my case was an interrupt firing after I had set D_CTRL to 0, essentially running the core in a zombie state, so it ate its own brain and just sat there jammed up.

Hopefully that is of some use to you, squire?

Obviously for the GPU you'll want to replace those D_ prefixes with G_ ;) :D :D
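To make the handshake concrete, here is a minimal hypothetical sketch of the DSP side of that scheme. STOP_REQ and I2S_DONE are assumed locations in DSP RAM (not names from the post above), and the D_FLAGS enable bit shown is from memory, so verify it against the hardware docs before trusting it:

i2s_svc:
	movei	#STOP_REQ,r10
	load	(r10),r11
	cmpq	#0,r11
	jr	eq,.work	; no stop pending: do the normal work
	nop
	movei	#D_FLAGS,r10
	load	(r10),r11
	bclr	#5,r11		; disable this interrupt source (assumed I2S enable bit)
	store	r11,(r10)
	movei	#I2S_DONE,r10
	moveq	#1,r11
	store	r11,(r10)	; confirm to the main loop that this ISR has stopped
.work:
	; ... normal interrupt work, then the usual return-from-interrupt sequence ...

Only once every such DONE flag is set does the main loop write 0 into D_CTRL, followed by the nops.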


OK, that sounds interesting; it's certainly related. I'll dig deeper into my external interrupt generation...


I routinely stop & restart both the GPU and DSP in the middle of the frame, while the other two cores (68000 and either DSP or GPU) are finishing their respective stages of the 3D pipeline. My engine supports both dynamic halt and semaphore wait (depending on the pipeline stage, I choose the more appropriate option) with all possible combinations between the three processors (well, except the 68000; there's never a rational reason to stop that one, despite what the plebs think, and don't make me link to the XKCD about "somebody is wrong on the internet" :- ) ).

 

Most certainly, it's not just a question of G/D_CTRL - frankly that notion is ridiculously naïve, and I would expect only somebody who is just starting out with the Jag to think that's enough. It is, however, an entertaining notion, I definitely won't argue that :lol:

 

The code that handles it, in a safe way, is about 3 pages long, IIRC.

 

 

And by safe I mean: it runs without crash/hang in a loop, 24/7, for 7 days straight (while the scripting component auto-plays through the first 25 levels, resulting in way over 100,000 such stops/restarts in that timeframe). Of course, it doesn't have to be 7 days; that's just my personal unit-testing preference (a 168-hour stress test) that makes me feel good about the stability of the code, so, obviously, YMMV...


Most certainly, it's not just a question of G/D_CTRL - frankly that notion is ridiculously naïve, and I would expect only somebody who is just starting out with the Jag to think that's enough. It is, however, an entertaining notion, I definitely won't argue that :lol:

VladR

'Making friends since 2008'


OK, that sounds interesting; it's certainly related. I'll dig deeper into my external interrupt generation...

 

IIRC, Atari wrote/said you should stop it in a loop. In BJL I have this macro:

 

MACRO	STOP_GPU
	movei	$f02114,r0
	load	(r0),r1
	bclr	#0,r1
.\sgwait	store	r1,(r0)
	jr	.\sgwait
	nop
	ENDM

 

And I never had problems stopping either GPU or DSP.


Did you have any interrupts running? And/or did you have any problems subsequently copying new code into the processor's RAM once you had stopped it in this fashion?

 

The problem I saw was that when you then tried to load code back into the supposedly stopped processor, it would sometimes lock up.


THEN the main loop stuffs 0 into D_CTRL and does a bunch of nops just to be sure it doesn't get carried away and run some extra code for lols :)

😅

Edited by JagChris


I routinely stop & restart both the GPU and DSP in the middle of the frame, while the other two cores (68000 and either DSP or GPU) are finishing their respective stages of the 3D pipeline. My engine supports both dynamic halt and semaphore wait (depending on the pipeline stage, I choose the more appropriate option) with all possible combinations between the three processors (well, except the 68000; there's never a rational reason to stop that one, despite what the plebs think, and don't make me link to the XKCD about "somebody is wrong on the internet" :- ) ).

 

Most certainly, it's not just a question of G/D_CTRL - frankly that notion is ridiculously naïve, and I would expect only somebody who is just starting out with the Jag to think that's enough. It is, however, an entertaining notion, I definitely won't argue that :lol:

 

The code that handles it, in a safe way, is about 3 pages long, IIRC.

 

 

And by safe I mean: it runs without crash/hang in a loop, 24/7, for 7 days straight (while the scripting component auto-plays through the first 25 levels, resulting in way over 100,000 such stops/restarts in that timeframe). Of course, it doesn't have to be 7 days; that's just my personal unit-testing preference (a 168-hour stress test) that makes me feel good about the stability of the code, so, obviously, YMMV...

 

And just how big are these "3 pages" you speak about?


The code that handles it, in a safe way, is about 3 pages long, IIRC.

 

*ROTFL* Three pages of code just to stop the GPU? Well, at which font size? How many lines? And what does "IIRC" mean here, why not just take a look and share?

 

Atari told us (back in the day) to stop the GPU/DSP in a loop. That works.

But then, why would I want to stop the GPU? In Tetris the GPU runs constantly and reloads new code. Then again, I do not run hours of scripted tests ...


I've heard that the game-save feature for each game must be programmed for the specific save chip on that specific cartridge.

 

For cartridges that use a different style or model of save chip, how will this be handled? Or has this already been touched on and I overlooked it?

Edited by JagChris


I can't wait for Vlad's project to hit a snag and for him to ask for help. :D

 

Would this be the project he said he'd show in September, which he never did and still hasn't? He's far too busy making poor attempts to bait people who actually produce things for the system, and misusing terms like "multi-threading". It's OK, he's actually pretty new to coding; I remember back when he first appeared and it took a few of us quite a few attempts to explain the concept and purpose of double buffering to him.

 

As much fun and as easy as it is to poke fun at VladR (his poor attempts to get a rise, his attempts to sound like he knows what he's doing, numberwang, his blatant non-starting projects, and his failure to produce anything), it's probably better we all just ignore him, like you would that irritating child eating its own snot whilst impersonating Pikachu, and keep this much more important thread on its rails.

 

VladR has had his pat on the head of attention now; let's wipe our hands and keep this thread about all the wonderful things the SD cart will do.


I can confirm that both the DSP and GPU need to be stopped at a per-interrupt level before halting them via the x_GO flags.

 

The issue only occurs when interrupts are flying. With direct routines, there's no need.


buuuuuuurn

 

 


 

 

As much as I hate to get back on topic, a big portion of those 3 pages of code also goes towards syncing between all three processors.


 

 

"Highlander" working has to be a negative, right?

 

 

Those would've been fighting words with Sam Tramiel back in 1995. You should've seen his eyes pop out in anger at me at the shareholders' meeting when I questioned him about how much of a true Highlander fan he was for having licensed the rights to the relatively unknown and unloved animated series instead of the mega-popular live-action TV series at the time...

 

Good times! :)

Edited by Lynxpro

I can confirm that both the DSP and GPU need to be stopped at a per-interrupt level before halting them via the x_GO flags.

 

The issue only occurs when interrupts are flying. With direct routines, there's no need.

 

One thing to note: the GPU/DSP should stop itself.

 

Here is some code from Atari:

;
; now kill the GPU
;
    movei    #G_CTRL,r0
    moveq    #2,r1
.die:
    store    r1,(r0)
    nop
    nop
    jr    .die
    nop


