
Now available: IntyBASIC compiler v0.8! :)


nanochess


Thanks for the info, by the way!

 

A quick question concerning DEFINE statements:

 

What does DEFINE do, exactly, that limits it to one DEFINE per WAIT? Doesn't it just poke a few bytes to memory? Could one work around DEFINE to set a number of random cards quickly by POKEing memory directly? If so, where would those memory locations be? At the beginning of the graphics space?



No way. The Intellivision doesn't allow defining GRAM outside of the vertical blank, so DEFINE "pipelines" the update request to be carried out at the video interrupt.

 

Because memory is limited, only one entry of three numbers is kept (source, target and size). DEFINE writes those locations and the interrupt handler does the work (check intybasic_epilogue.asm or the generated listing for the locations).
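For instance, a minimal sketch of how that plays out in IntyBASIC source (the GRAM slot, label name and bitmap are made up for illustration; the bitmap data would normally live after the main loop so execution never falls into it):

REM Queue a copy of one card from box_bitmap into GRAM slot 0.
DEFINE 0,1,box_bitmap
REM The copy itself happens inside this WAIT, during the vertical blank.
WAIT

box_bitmap:
BITMAP "########"
BITMAP "#......#"
BITMAP "#......#"
BITMAP "#......#"
BITMAP "#......#"
BITMAP "#......#"
BITMAP "#......#"
BITMAP "########"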


Unfortunate, but gotcha.

 

On a random note, does IntyBasic support floats? Apparently it supports signed and unsigned (though I'm not completely clear on how that works, either).

 

Comparisons are signed; all other operators are unsigned. Most of the time you can work with signed values without trouble.
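A tiny sketch of what that means in practice (the variable names here are hypothetical):

#a = -2
REM Comparisons are signed, so this branch is taken.
IF #a < 0 THEN #b = 1
REM Division, like the other operators, is unsigned: $FFFE / 2 gives $7FFF, not -1.
#c = #a / 2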

 

No float support. I would have to write a big floating point library... :sleep:


I tend to implement only things that are useful in the immediate future for 90% of games. A fixed-point library would have too narrow an application currently.

In arcade games, fractional movement is very important for smooth moving graphics and it minimises RAM usage. This is the method I use in my games :-

 

http://atariage.com/forums/topic/209764-more-efficient-sub-pixel-movement/

 

If you don't use fractions, you have to keep a frames-per-movement counter for each moving object, as well as its X and Y positions. Decrementing a variable in RAM just to see if something should be moved is extra overhead on top of doing the actual movement.

 

Fractional movement also helps PAL and NTSC games move their on screen objects at the same speed.

 

I think "cheating floating point" would be a good addition to IntyBASIC. So if you wrote :-

Y1=Y1+1.5
It would be converted to the following assembler (probably :P) :-

mvi Y1, r0      ; Y1 keeps the fraction in its high byte and the pixel position in its low byte
addi #$8001, r0 ; +1.5 pixels ($80 = 0.5 fraction in the high byte, $01 = 1 pixel in the low byte)
adcr r0         ; a carry from fraction overflow wraps around into the pixel position
mvo r0, Y1
So for movement in Y, the fractional part could be made up of 1/2, 1/4, 1/8 and 1/16 e.g.

 

+1.50 = $8001

+1.75 = $C001

+0.0625 = $1000

...

 

For movement in X, the fractional part could be made up of 1/2, 1/4, 1/8, 1/16 and 1/32.

 

If you decide to add this to IntyBASIC, then ideally, if the parser can't match a given cheating floating point number to a supported fraction, it would put up a warning saying what value it has been converted to.
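Tying that to the PAL/NTSC point above, a sketch using the same fraction-in-high-byte constants (is_ntsc is a hypothetical flag set elsewhere in the program):

REM Same on-screen speed, 75 pixels per second, on both TV standards.
#yspeed = $8001
IF is_ntsc THEN #yspeed = $4001
REM NTSC: 60 frames/sec * 1.25 pixels = 75 px/sec ($40 = 0.25, $01 = 1 pixel)
REM PAL:  50 frames/sec * 1.50 pixels = 75 px/sec ($80 = 0.50, $01 = 1 pixel)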


Hmm... I seem to be making some mistake with Color Stack mode. I define the mode like so:

 

MODE 0, 0, 1, 4, 15

 

... And then I print something:

 

PRINT AT 101 COLOR 2, "Hello, World!"

 

... But the background becomes a different color than those specified by the stack. It seems to be directly tied to where I'm printing at, as 85, 101, and 117 all create a solid orange background, for instance.

 

What am I missing here?

 

EDIT: Apparently the problem was with using a PRINT statement before a WAIT after setting the MODE.



MODE is also pipelined for update in the next WAIT.
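So the working order looks like this (a sketch reusing the values from the post above):

MODE 0,0,1,4,15
REM The mode change is applied during this vertical blank.
WAIT
PRINT AT 101 COLOR 2,"Hello, World!"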


I suppose, as an alternative to floats, I could use 16-bit integers divided by 100 for X/Y coordinates, allowing me to use the 0-99 range as a decimal?

 

Dividing by 100 would be a bad idea. You're better off picking a power of 2, as that "division" can be handled entirely with shifts. If you pick 256, then dividing by 256 is just a SWAP and an AND. Thus your fractional values go 0 to 255.
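A rough IntyBASIC sketch of that scheme (variable names are hypothetical; #yfix is a 16-bit variable holding the pixel position in its high byte and the fraction in its low byte):

REM 384 = 1.5 * 256, so this moves 1.5 pixels per frame.
#yfix = #yfix + 384
REM The divide by 256 is what can compile down to just a SWAP and an AND.
ypix = #yfix / 256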

 

nanochess: If you don't want to implement full fixed point support, at least recognizing "x / 256" can be a SWAP and an AND would give you an 80% solution, most likely, and it's somewhat aligned with the techniques GroovyBee and I were talking about in that thread he linked above.

 

(Edit: If it already does that, then great! Get the word out that this is one way to get "fixed point." I have to admit, I haven't downloaded IntyBASIC 0.8 to try it.)


 


 

It's a good idea. I've added it to my notes for future development :)


OK, so first off I just have to pass on a HUGE thanks to nanochess for INTYbasic. What a ridiculously handy tool! I've already managed to prototype out a working game in a matter of hours (and it's been probably 10 years since I've coded anything serious on any platform, let alone BASIC or ASM). It just saves so much effort and thinking. Also holy crap thanks for including &binary support in 0.8. My brain thinks in terms of bitmasking everything and on a platform like this it just makes so much sense. So nice to see it laid out in the code when necessary.

 

I'm gonna have a whackload of questions as I slowly reverse-engineer everything. I've got MOBs figured out pretty well but other display elements are confusing the hell out of me. Perhaps there's a reference doc somewhere that I haven't found, but:

 

1. What does BORDER really do? The MASK allows some thicker borders on a couple of sides, but it's not a full rectangle. Not sure I understand the point of this because no matter what, it's uneven.

 

2. When you blit data using SCREEN, how exactly are the bits in the DATA area used? I've been playing around with someone's "clouds and hill" example from another thread and I can mostly figure it out with trial and error, but I'm a binary guy and I like to know what each bit represents. For SPRITE, the manual has a good description of what each bit does. Made it pretty obvious. But for SCREEN - I know that some bits are being used to set FG and BG color on a card, and other bits are being used to select the card from GROM/GRAM, but I can't quite line it up every time. They're close, but not the same, as the bit pattern for SPRITE. At least I don't think so? I'm using MODE 1 if that helps as I believe this limits the number of cards you can access - but I don't understand why the bit assignment seems so different.



The manual doesn't state specifically, but it does link to the STIC documents that do. The information you're looking for is toward the bottom of that page, under "Background".

 

 

Ah, true - thanks. But it only works for scrolling in 2 directions I guess.

The scrolling always affects either the top, left, or top and left of the screen, regardless of whether it's left or right, for instance.

 

BORDER can be set to mask either the top, the left, or the top and the left, depending on your needs. Or you can just not use it.
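In IntyBASIC terms that is what the BORDER statement's mask argument is for. A sketch (the numeric mask value here is an assumption, so check the manual):

REM Border color 0 (black); the second argument is the mask (value assumed here):
REM it hides the leftmost column so scrolling artifacts on that edge stay covered.
BORDER 0, 1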


 


1. When you put a value in the horizontal or vertical delay register, it adds some blank lines to the display. For example, if I put a 3 in the horizontal delay register, it adds three vertical lines of pixels (in the border color) to the left side of the screen, and the rest of the screen is shifted over by 3 pixels (and the screen stops drawing 3 pixels earlier than usual, so the number of visible pixel columns is unchanged).

 

If you don't set the extended border, then the player can see these blank lines appearing on the left edge of the screen as the delay register value changes. With the extended border set, the first 8 pixel columns of the display are hidden, so the 0-7 lines added by the delay register are not seen by the player.

 

2. The Intellivision wiki can be useful for this kind of information.

 

[attached image]

 

http://wiki.intellivision.us/index.php?title=STIC


In every case like this that I can think of, there's been an architectural reason. Often it's for efficiency or some other performance trick. I've always tipped my hat to programmers who just took it in stride and programmed around it. Personally it just means that I'm endlessly writing out binary strings. I suppose if I did this fulltime for a few months it'd become second nature eventually.


Yeah, they usually design the bits in the way that is most efficient from a hardware design point of view.

If they can save a few cents of production cost for a chip that they make millions of by reversing a few bits, it seems like a good tradeoff to them.

(Look at the Apple II bitmap for instance - Woz arranged it that way just to save a part or two in the design...)

 

No one cares if the programmers suffer!!!! :(



I think of hexadecimal as kind of a "shorthand" for binary. If you get comfortable with hex numbers, it saves a lot of writing and typing....



 

Oh it absolutely is, in fact it's mostly why hex was created in the first place - and certainly why it has such widespread use in computing. It all just gets translated to binary in the end anyway.

 

It still doesn't help much when you have to use flipped bits and such in registers, at least until you start memorizing certain bit patterns.
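For what it's worth, IntyBASIC accepts all three notations side by side, so the &binary support mentioned earlier pairs nicely with the hex shorthand (a trivial sketch; the variable name is made up):

REM The same value written three ways:
flags = &10100101
flags = $A5
flags = 165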

