# Anyone for a little benchmarking?

## Recommended Posts

1 hour ago, Maury Markowitz said:

Wild! But I think I know why...

To be honest, I'm not sure what Atari BASIC (AB herein) does here. I know that for normal numeric constants, like A=A+10, the constant is parsed and replaced by its FP representation, with a \$0E placed in front of it to indicate a constant. The FP value is 6 bytes (a 40-bit mantissa plus an exponent byte), so the total length of any constant is 1 byte + 6 bytes.

I'm not sure if they do this for line numbers; I'll have to look. It would save the conversion at runtime, but since AB's maximum line number is 32767, it's 7 bytes of constant vs. a maximum of 5 bytes of text, and typically more like 3 bytes. Moreover, you have to convert the resulting FP number into the 16-bit line number anyway (which it uses, like MS), so there's still a small performance hit.

And wow, I just realized why they used a BCD math package! It's because if you tokenize a number into a "real" binary FP, as opposed to BCD, then when you convert it back to a string you might get line 99.999999... This doesn't really do anything, but it sure looks weird - too weird. That's the problem you were avoiding by storing the original string constant. AB doesn't have to do that; small integers will always convert back correctly.
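(As later posts in this thread point out, binary FP actually represents small integers exactly too, so the 99.999999 display only appears with buggy conversion routines or fractional values. A quick Python sanity check - IEEE-754 doubles here, but any binary format with enough mantissa bits behaves the same:)

```python
# Binary floating point stores every integer exactly as long as it fits
# in the mantissa (53 bits for a Python float / IEEE-754 double), so a
# line number converted to binary FP always converts back cleanly.
def roundtrips(n: int) -> bool:
    """Convert an integer to binary FP and back; True if it survives."""
    return int(float(n)) == n and str(float(n)) == f"{n}.0"

# Every Atari BASIC line number (0..32767) survives the round trip:
assert all(roundtrips(n) for n in range(32768))

# Fractions are where binary FP and BCD genuinely differ:
print(0.1 + 0.2)   # 0.30000000000000004 in binary FP; 0.3 in BCD
```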

Which brings me full circle on the whole int-FP question. If AB had 16-bit int constants, a relatively trivial upgrade, then you wouldn't need to use FP for (most) constants, and then you could replace the FP package with the MS one with few downsides. That makes for a very interesting test!

In AB I would do that by using a unique token for the original 16-bit line number, say \$0B, and then if you encounter an \$0B during runtime...

Oh, wait, this works perfectly!

... if you encounter a \$0B during runtime, you run the line-lookup routine, change the \$0B to \$0A, and change the 16-bit line number to a 16-bit line address.

I didn't think this would work because I think doing the scan-and-replace at program startup would be a high cost, especially in long programs. I note that BASIC XL does this if you use the FAST command. I concluded the table-lookup was the only solution that offered both a performance improvement and avoided startup issues.

But looking at it now, you don't have to do this replacement for every line, only the ones that actually get called. And you only know that at runtime, so just do that then. This does introduce some randomness in the performance of the GOTO, but its worst case is still MS's best case.

Of course one could combine the two approaches. When a \$0B is encountered, you do the line-number-lookup and replace it in-place. But then you also put it in a lookup table. So then when the next unconverted GOTO pointed to the same line is seen, you can convert it using the table. However, I'm not convinced this is worthwhile given the complexity it would add to the runtime.
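The patch-on-first-use scheme above can be modeled in a few lines of Python. This is a toy illustration, not Atari BASIC internals: `LINENUM` and `LINEPTR` are invented stand-ins for the \$0B/\$0A tokens, and the "program" is just a list of (line number, statement) pairs:

```python
# Toy model of lazy GOTO patching: the first time a GOTO executes, the
# line-number operand is resolved and rewritten in place as a direct
# pointer, so every later execution skips the line search entirely.
# Token names (LINENUM/LINEPTR) are invented stand-ins for $0B/$0A.
LINENUM, LINEPTR = 0x0B, 0x0A

def find_line(program, target):
    """Linear search, like the interpreter's line-lookup routine."""
    for addr, (line_no, _stmt) in enumerate(program):
        if line_no == target:
            return addr
    raise RuntimeError(f"line {target} not found")

def execute_goto(program, operand):
    """operand is a mutable [token, value] pair embedded in the code."""
    token, value = operand
    if token == LINENUM:                       # first execution: search...
        addr = find_line(program, value)
        operand[0], operand[1] = LINEPTR, addr  # ...and patch in place
    return operand[1]                          # later executions: direct jump

program = [(10, "PRINT"), (20, "GOTO 10"), (30, "END")]
goto_operand = [LINENUM, 10]                   # tokenized "GOTO 10"
print(execute_goto(program, goto_operand))     # slow path: searches, patches
print(execute_goto(program, goto_operand))     # fast path: direct pointer
```

The second call never touches `find_line`, which is the whole point: only lines that are actually branched to ever pay the search cost.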

It seems a direct comparison of the two (three?) approaches would be needed. Unfortunately, existing benches would be useless for this, because they are very small and only have one or zero GOTOs. You'd want to run something much longer with lots of typical BASIC spaghetti to know what's really going on. Super Star Trek would be great, but I can't imagine a way to benchmark it.

I am not taking the time to break that up this time, so... sorry in advance.
(incoherent rambling may follow)

I considered converting during runtime, and using tokens.
Part of the reason I didn't want to do it during runtime is that I didn't want to use additional tokens: it's just another byte to parse every single time, and not doing that leaves more room for code.
If I convert the number when the line is tokenized, the slowest part is already done.
If you could just search for the INT/POINTER tokens, this would be a no-brainer, but you can't, since embedded ints/pointers mean some of those bytes could match the token values.  So you have to parse the line in an intelligent manner.
This is probably the best excuse for making the change during runtime, tokens or not.
Either way, you still have to scan every line converting pointers back to int line numbers before you enter/change new lines, or save the program.

An alternative (just thought this up), would be to give GOTO and GOSUB alternate tokens.  When you encounter the tokens that signify INTs follow at runtime, you convert the #s that follow, and update the tokens, then call the alternate routine for GOTO/GOSUB... or part of it anyway.
It eliminates the delay at startup, and you don't have to constantly check whether numbers are ints or pointers.

If you keep a table during conversion from line # to pointer, it would certainly speed up converting line #s to pointers.
If you convert when you type RUN, the table could reside in variable space during conversion, and could be ditched once conversion is complete.
That way it wouldn't take additional RAM.
I wouldn't bother during runtime.  The first use of a GOTO/GOSUB is only going to be a few clocks slower than before, and then it's faster after that.
The speed penalty of searching for a line number on the 6803 & 6809 isn't so bad since the index register(s) are 16 bit, and you have a 16 bit accumulator to compare the line # when you are converting to pointers.  You compare high and low bytes with a single instruction.  And this is actually a small amount of what the interpreter is doing anyway.

Converting back to ints is simple on the 6803/6809 since the pointer already points to the destination line location.
In my case (off the top of my head), you do something like this once you find a pointer token:

```
LDX  CHRPTR        ; point to current parse location (CHRPTR is direct page pointer to current parse location)
LDAB #INTTOKEN     ; get token for integer
STAB ,X            ; change token from pointer to int
LDX  1,X           ; load the line pointer (address of pointer token +1)
LDD  2,X           ; Get 16 bit int line #  (lines start with pointer to next line, followed by line number as 16 bit int)
LDX  CHRPTR        ; get current parse location pointer
STD  1,X           ; change pointer to line number
INX
INX
INX
STX  CHRPTR        ; update location to continue parsing from
```

If you ditch the token, it's slightly smaller.  This assumes the pointer is pointing to a GOTO or GOSUB token, or comma in the event of ON GOTO/GOSUB:

```
LDX CHRPTR      ; point to current parse location (CHRPTR is direct page pointer to current parse location)
LDX 1,X         ; load the line pointer
LDD 2,X         ; Get 16 bit int line #  (lines start with pointer to next line, followed by line number as 16 bit int)
LDX CHRPTR      ; get current parse location pointer
STD 1,X         ; change pointer to line number
INX
INX
INX
STX CHRPTR      ; continue parsing here
```

That would need to be updated for alternate GOTO/GOSUB tokens.
The 6809 could use LEAX 3,X instead of multiple INX instructions, or something like that.

##### Share on other sites
7 hours ago, Maury Markowitz said:

I am curious, does overseas entail a trip eastward or westward?

Europe, multi-country... with family...

Packed-days, back-to-back... you can imagine....

##### Share on other sites
7 hours ago, Maury Markowitz said:

Sure, but they didn't do that when they published the results.

So if you want to compare the Atari to, say, the Apple II, you either have to have both machines and run the same code, or use the published result and use the original code.

As @phaeron noted, the difference is somewhere like a factor of 2x, so this is non-trivial. And yes, it would be about 2x on the Apple II as well.

There will NEVER be a factor of x2 difference... especially since I packed ALL TESTS in a SINGLE source listing, which means adding a TON of line numbers to the tests I ran, which do not exist in the individual tests (e.g. my suite is heavier!)

Just wait until I have the chance to extract Test #2 lines, and run them separately from the suite, with Altirra Basic 1.56 and Altirra FP package, as well as the very-high performance XL/OS FP package... all still contained in stock 16KB + 8KB rom space... 🙂

##### Share on other sites
On 6/27/2019 at 1:08 PM, JamesD said:

An alternative (just thought this up), would be to give GOTO and GOSUB alternate tokens.  When you encounter the tokens that signify INTs follow at runtime, you convert the #s that follow, and update the tokens, then call the alternate routine for GOTO/GOSUB... or part of it anyway.
It eliminates the delay at startup, and you don't have to constantly check whether numbers are ints or pointers.

After some thought, this is probably what I will do.  I'll divert the normal GOTO & GOSUB tokens to routines that turn the 16 bit line numbers into pointers, switch the code to alternate tokens, and then call the normal function.  I don't have to worry about ON, I'll just keep converting until I get to a : or the end of the line, skipping commas if there are any.  After that, the GOTO or GOSUB will just load the pointer as if it had already searched for the line number, and will continue normally from there.
There's no startup delay, numbers only need to be converted when the GOTO or GOSUB is executed, and execution is always fast after that point.
Converting back will have a delay, but that's not such a big deal as it only has to be done once before entering multiple new lines.

Atari BASIC will probably need tokens just due to the ability to use variables instead of constants.
I thought using tokens might allow me to add GOTO variable capability to MS BASIC, then I remembered that floating point doesn't represent all numbers exactly.  Do I truncate?  Round down?  Round up?  Want a reason to use BCD?  This is a good one.
If I can simply truncate numbers to represent the entire BASIC range, I may add this feature.

Changing to 16 bit integers and pointers does present a few challenges in MS BASIC.
The code that scans through a line assumes ASCII, and isn't smart enough to skip these, so I *may* need to use tokens as well.
Maybe use tokens \$FE, and \$FF so I can just test for > \$FD or something like that.
I'm not sure yet though; \$FF was used in Extended COLOR BASIC for 2 byte tokens.  Guess I could do the same with \$FD.
MS BASIC is (was) very dependent on searching for zero as an end of line marker.
If it has to drop to the next line, it searches for the end of line marker even though the first thing on a line is the pointer to the next line.
Yup, if there is an IF at the start of the line, and the line is 200 characters long, it has to skip over 190+ characters to find the next line.
After I started keeping the pointer to the current line rather than the current line number, that went away, but it still has to skip constants searching for the next statement, and I'm sure for a few other things.
If I want to convert ALL constants, not just line numbers, I'll have to use tokens to distinguish between INT, FLOAT, and variables.
But regularly used constants can be defined at the top of the program to skip converting the type, so this isn't as big of a deal there.
I do need to see if I can speed up searching the variable table, or eliminate the search completely.

Sorry if I've hijacked the thread a bit, just thought some of this might be useful to further speed up Atari BASIC as well.
We have some of the same issues to deal with.

##### Share on other sites

Hi!,

On 6/21/2019 at 5:24 PM, Faicuai said:

NOTES:

1. Since FastBasic does NOT support arrays of reals (FP), these were first converted to STRINGS and there stored as such.

Well, if you need floating point arrays in FastBasic, just ask!

Attached is a new beta (testing) version, compiled from current sources, adding floating point arrays. This version has many changes from the current released 4.0, I should do a proper release soon.

With this version, and using the "AtariOS-800XE-Rev03-FastMath.rom" rom that you posted on another thread, the timings are:

The benchmark source is in the ATR image, the changes are like this (line 24):

Have Fun!

##### Share on other sites

Hi!

On 6/27/2019 at 9:36 AM, Maury Markowitz said:

And wow, I just realized why they used a BCD math package! Its because if you tokenize a number into a "real" FP, as opposed to BCD, then when you convert it back to a string you might get line 99.999999... This doesn't really do anything, but it sure looks weird - too weird. That's the problem you were avoiding by storing the original string constant. AB doesn't have to do that, small integers will always convert back correctly.

That is not correct. A proper implementation of binary floating point routines will always print an equal or smaller representation than the original. But yes, the implementation could have bugs.

The problems in floating point are more subtle, like this in the C64:

##### Share on other sites
2 hours ago, dmsc said:

That is not correct. A proper implementation of binary floating point routines will always print an equal or smaller representation than the original.

More importantly, binary FP can always represent integers exactly.  Well, as long as the mantissa has enough bits.  A MS BASIC single on PC is 4 bytes (1 exponent, 3 mantissa) so it can perfectly represent any integer between 16777215 and -16777215.

You only lose precision when the fractional part includes factors that do not divide into your number base.  For base 2 floating point, that's anything in the thirds, fifths, sevenths, etc.  For base 10, you can at least accurately represent fifths.
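Python's `decimal` module makes the base-2 vs base-10 point concrete: fifths terminate in decimal but repeat in binary, while halves and quarters are exact in both:

```python
from decimal import Decimal

# Constructing a Decimal from a float exposes the exact value that the
# binary FP representation actually holds:
print(Decimal(0.2))       # 0.200000000000000011102230246251565... (1/5 repeats in base 2)
print(Decimal("0.2"))     # 0.2 exactly (1/5 terminates in base 10)

# Powers of two terminate in both bases, so the representations agree:
assert Decimal(0.25) == Decimal("0.25")
# ...but binary FP's "0.2" is not the decimal 0.2:
assert Decimal(0.2) != Decimal("0.2")
```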

##### Share on other sites
6 hours ago, ChildOfCv said:

More importantly, binary FP can always represent integers exactly.  Well, as long as the mantissa has enough bits.  A MS BASIC single on PC is 4 bytes (1 exponent, 3 mantissa) so it can perfectly represent any integer between 16777215 and -16777215.

You only lose precision when the fractional part includes factors that do not divide into your number base.  For base 2 floating point, that's anything in the thirds, fifths, sevenths, etc.  For base 10, you can at least accurately represent fifths.

Du-Oh!  Yeah, you are correct.  MS BASIC always normalizes the number so you can't just grab the first 16 bits and expect it to be the line number, but there is already a ROM call to get an unsigned 16 bit int.

##### Share on other sites
2 hours ago, JamesD said:

Du-Oh!  Yeah, you are correct.  MS BASIC always normalizes the number so you can't just grab the first 16 bits and expect it to be the line number, but there is already a ROM call to get an unsigned 16 bit int.

Well, it does attempt to use the easiest type to represent a number.  The parser begins with an int type and starts filling it with digits until it exceeds 3275 and still has another digit to add.  Then before adding that digit, it converts the number to a single and starts multiplying by 10.0 and adding the next digit.  If it exceeds 1677720 and still has more digits, it converts to double and continues.  Of course if it encounters a dot, then it immediately converts to single.  If it sees an E, then it's expecting a single with the exponent specified and begins parsing the exponent.  If it sees a D, then it begins parsing the exponent for a double.
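That promotion ladder can be sketched as follows; the thresholds are the ones quoted above (not independently checked against the ROM), and the type names are just labels:

```python
# Sketch of the numeric-constant parser's type-promotion ladder as
# described above: accumulate as int, promote to single when the next
# digit would overflow 16 bits, promote to double when it would exceed
# the 24-bit single mantissa. Thresholds are the values from the post.
def parse_number(digits: str):
    kind, value = "int", 0
    for d in digits:
        if kind == "int":
            if value <= 3275:
                value = value * 10 + int(d)
                continue
            kind, value = "single", float(value)   # promote before adding
        if kind == "single" and value > 1677720:
            kind = "double"                        # promote before adding
        value = value * 10.0 + int(d)
    return kind, value

print(parse_number("123"))        # ('int', 123)
print(parse_number("40000"))      # promoted to single
print(parse_number("123456789"))  # promoted to double
```

(Exponent suffixes are omitted here; a decimal point would force an immediate promotion to single.)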

There are some ops that simply expect the type to be int and will error out if it's not.  I haven't quite analyzed that far, but I bet that when parsing a line number, it expects an int.

Also, some ops that want an unsigned integer such as peek and poke will do some fiddly stuff to see to it that you can represent them as large integers instead of signed ones.  But if performance is an issue, you're probably better off using the signed equivalent.

##### Share on other sites
17 hours ago, JamesD said:

After some thought, this is probably what I will do.  I'll divert the normal GOTO & GOSUB tokens to routines that turn the 16 bit line numbers into pointers, switch the code to alternate tokens, and then call the normal function.

This is really not necessary. Probably check the source code of Basic++ (which is a bit more readable than that of Atari Basic). GOTO and GOSUB use the generic expression parser to read their argument, i.e. they call into GetLineNumber, from there into GetInteger, and from there into EvaluateExpression. The latter checks the token value, and if it is a numeric constant token, calls into GetBCD, which reads a floating point token from the expression. So, as far as expression evaluation is concerned, this needs to be modified to include an extra token - at this level - to pull an integer constant, plus a flag that indicates that a conversion to integer is no longer necessary. As BCD numbers are all in the range 00 to 99, the easiest I could think of is to set fp0+2 to some invalid BCD value like 0xFF and use that as an indicator for the integer to BCD conversion.

The benefit is that now that the generic expression tokenization has been modified, all statements could profit from it.

You still need to modify the parser, though. The crucial part here is "ParseNumber", which calls the ASCII to BCD conversion in the FP-package (ASCIIToBCDVector) and injects a constant-number expression into the tokenized program. This would require a test whether the number is an integer, and if so, injecting another token for it.

So, it is all pretty straightforward, though the tokenized program would then no longer be compatible with Atari Basic, which is the reason why it wasn't done in Basic++.

##### Share on other sites
9 hours ago, ChildOfCv said:

Well, it does attempt to use the easiest type to represent a number.  The parser begins with an int type and starts filling it with digits until it exceeds 3275 and still has another digit to add.  Then before adding that digit, it converts the number to a single and starts multiplying by 10.0 and adding the next digit.  If it exceeds 1677720 and still has more digits, it converts to double and continues.  Of course if it encounters a dot, then it immediately converts to single.  If it sees an E, then it's expecting a single with the exponent specified and begins parsing the exponent.  If it sees a D, then it begins parsing the exponent for a double.

There are some ops that simply expect the type to be int and will error out if it's not.  I haven't quite analyzed that far, but I bet that when parsing a line number, it expects an int.

Also, some ops that want an unsigned integer such as peek and poke will do some fiddly stuff to see to it that you can represent them as large integers instead of signed ones.  But if performance is an issue, you're probably better off using the signed equivalent.

In my case, the MC-10 (and most other 8 bit MS BASICs) only support the short floating point format.  There isn't any precision option, though calculations use an extra byte for greater precision.
There are ROM functions to convert ASCII to float, float to 16 bit signed int, float to unsigned int, and float to 8 bit int.. among other things.
It's easy enough to manipulate, but it's time consuming.  There really aren't any ints until after numbers are converted.

6 hours ago, thorfdbg said:

This is really not necessary. Probably check the source code of Basic++ (which is a bit more readable than that of Atari Basic). GOTO and GOSUB use the generic expression parser to read their argument, i.e. they call into GetLineNumber, from there into GetInteger, and from there into EvaluateExpression. The latter checks the token value, and if it is a numeric constant token, calls into GetBCD, which reads a floating point token from the expression. So, as far as expression evaluation is concerned, this needs to be modified to include an extra token - at this level - to pull an integer constant, plus a flag that indicates that a conversion to integer is no longer necessary. As BCD numbers are all in the range 00 to 99, the easiest I could think of is to set fp0+2 to some invalid BCD value like 0xFF and use that as an indicator for the integer to BCD conversion.

The benefit is that now that the generic expression tokenization has been modified, all statements could profit from it.

You still need to modify the parser, though. The crucial part here is "ParseNumber", which calls the ASCII to BCD conversion in the FP-package (ASCIIToBCDVector) and injects a constant-number expression into the tokenized program. This would require a test whether the number is an integer, and if so, injecting another token for it.

So, it is all pretty straightforward, though the tokenized program would then no longer be compatible with Atari Basic, which is the reason why it wasn't done in Basic++.

Well... I'm dealing with MS BASIC.

For one thing, MS BASIC stores line numbers as ASCII.
I wanted to move the ASCII to 16 bit int number conversion to the line tokenization stage, then the ASCII conversion (which I've already sped up) is done up front since it's a costly operation.  That alone would offer a decent speed improvement.
Changing int line numbers to pointers would eliminate line number searches every time GOTO or GOSUB is encountered.
Converting to ints during tokenization also means line lengths aren't altered at runtime by switching from line numbers to pointers.
I also plan to remove spaces during tokenization to simplify runtime parsing.
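A minimal sketch of that tokenize-time conversion (Python, with an invented token value and little-endian byte order assumed):

```python
import re

GOTO_TOKEN = 0x89   # invented value for illustration, not the MC-10's real token

def tokenize_goto(stmt: str) -> bytes:
    """Tokenize 'GOTO nnnn' as token byte + 16-bit little-endian line
    number, stripping spaces so the operand directly follows the token."""
    m = re.fullmatch(r"\s*GOTO\s+(\d+)\s*", stmt)
    if m is None:
        raise SyntaxError(stmt)
    line = int(m.group(1))
    if line > 65535:
        raise SyntaxError("line number too large")
    return bytes([GOTO_TOKEN]) + line.to_bytes(2, "little")

print(tokenize_goto("GOTO 1000").hex())   # 89e803  (0x03E8 = 1000)
```

The costly ASCII-to-int conversion happens once, at line-entry time, and the runtime GOTO handler can read the operand with a single 16-bit load.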

At runtime, whenever a token is encountered, the normal interpreter does a range check on the token, then calls an error hook if it's a bad token, or calls the function.
I enlarged the token index table so the range check can be skipped; it just jumps to the function, or to the error hook for bad tokens.

If the normal ON/GOTO/GOSUB calls directly jump to a function that converts line numbers to pointers, it just automatically assumes it has to convert before continuing.  The line lookup is completed, the int is changed to a pointer, and the token is changed to the alternate token.  Then if possible, I'll position the code so it drops into the pointer version of GOTO or GOSUB to make the jump.  Eliminating spaces during tokenization guarantees that the 16 bit line number will immediately follow the token, so we don't need to search for the INT.
From then on, when that GOTO/GOSUB function is encountered, the function that expects pointers is called, and it just loads the address to continue executing from as the next two bytes if the flag for ON isn't set.

ON has to do a comma skip, and a check for the end of the list of line pointers, so it will be slower, but not by much since it can just test every third byte (if spaces are removed) as it searches for the line number to call.  Any syntax error should be barfed out during tokenization, and any unknown line number will be barfed out during the line number lookup.

Anyway, this change to GOTO and GOSUB could save thousands of clock cycles over the current interpreter every time one is encountered.
Everything time consuming is done during tokenization, or just once.
Well, in theory anyway.  Actual coding may be more difficult. (unintended consequences are sure to follow)

That leaves expression evaluation and variable lookups as the big bottlenecks, and solutions aren't quite so easy.
Improving variable lookups would speed up a lot of things.

##### Share on other sites
On 6/27/2019 at 3:01 PM, phaeron said:

No, this is not a problem for binary floating point either. In 32-bit binary floating point with a 23-bit mantissa, all integers under +/-2^24 are represented exactly.

So much for that theory. And it was such a nice theory...

On 6/27/2019 at 3:01 PM, phaeron said:

Line number caching is actually quite a lot simpler than using binary addresses: it only requires some hashtable memory, a hash lookup/insertion in the line number lookup code, and a few hooks to invalidate/clear the hash table when execution starts or the program is modified.

Agreed. Of course the table improves if the LHS is 16-bit, is that what you do on insertion?

On 6/27/2019 at 3:01 PM, phaeron said:

In practice, between common practice in Atari BASIC programs and line caching, I haven't seen line number lookup showing up very hot in the profiles in Altirra Extended BASIC.

...but that's because it's cached, no? I suspect it would show up a *bit* higher in traces of the original!

##### Share on other sites
On 6/27/2019 at 3:08 PM, JamesD said:

An alternative (just thought this up), would be to give GOTO and GOSUB alternate tokens.  When you encounter the tokens that signify INTs follow at runtime, you convert the #s that follow, and update the tokens, then call the alternate routine for GOTO/GOSUB... or part of it anyway.
It eliminates the delay at startup, and you don't have to constantly check whether numbers are ints or pointers.

That's what I was thinking about. In the case of Atari BASIC you have the "this is a FP constant" token following the GOTO (well, 9 times out of 10 anyway), so when you encounter these you replace them with "this is a line number constant". This requires one more check in the parsing/eval, but if it hits (which it will that same 9 of 10), you're ready to call the GOTO. Then on SAVE you convert them all back to their original format, and on run/edit, well, I'm not 100% sure.

But as the rest of this thread has noted, the single step of indirection you get by placing the address in a hash offers no discernible difference in overall performance while avoiding this overhead. So I'm no longer so hot on this concept.

Of course in Atari BASIC one might still wish to replace the 40-bit FP constant that does follow that 9-of-10 with a 16-bit version simply for RAM considerations.

On 6/27/2019 at 3:08 PM, JamesD said:

I wouldn't bother during runtime.  The first use of a GOTO/GOSUB is only going to be a few clocks slower than before, and then it's faster after that.
The speed penalty of searching for a line number on the 6803 & 6809 isn't so bad

Hmm, I think the 65C02 would also allow this by having the line number addresses directly hashed and then using that ridiculously named "indexed absolute indirect" addressing on the JMP (the 65C02 has no indirect JSR). It would mean having to put the line-number-table starting address in the zero page, however.

##### Share on other sites
On 6/30/2019 at 12:56 AM, ChildOfCv said:

Well, it does attempt to use the easiest type to represent a number.  The parser begins with an int type and starts filling it with digits until it exceeds 3275 and still has another digit to add.

Which version of MS BASIC is this? QBASIC or later?

##### Share on other sites
2 hours ago, Maury Markowitz said:

That's what I was thinking about. In the case of Atari BASIC you have the "this is a FP constant" token following the GOTO (well, 9 times out of 10 anyway), so when you encounter these you replace them with "this is a line number constant". This requires one more check in the parsing/eval, but if it hits (which it will that same 9 of 10), you're ready to call the GOTO. Then on SAVE you convert them all back to their original format, and on run/edit, well, I'm not 100% sure.

A lot of this is subject to change based on what the code actually does/will be vs what we think.  🤪
And the "unintended consequences".

Quote

But as the rest of this thread has noted, the single-step-of-indirect you get by placing the address in a hash somewhat offers no discernable difference in overall performance while avoiding this overhead. So I'm no longer so hot on this concept.

Of course in Atari BASIC one might still wish to replace the 40-bit FP constant that does follow that 9-of-10 with a 16-bit version simply for RAM considerations.

Hmm, I think the 65C02 would also allow this by having the line number addresses directly hashed and then using that ridiculously named "indexed absolute indirect" addressing on the JMP (the 65C02 has no indirect JSR). It would mean having to put the line-number-table starting address in the zero page, however.

This is the current code I'm dealing with.
The initial parse JSR will be ditched when I move that to the tokenization stage.

```
GOTO      jsr       LE6B2               ; parse line number into BINVAL
          ldx       CURLIN              ; get address of current line
          ldd       BINVAL              ; D = target line number
          subd      2,x                 ; subtract current line #
          bhi       LE628               ; branch if target > current
          ldx       TXTTAB              ; point X to beginning of program
LE628     jsr       LE3BB               ; search for the numbered line
          dex                           ; point X to previous line's terminator
          stx       CHRPTR              ; set new parser position
LE630     rts                           ; resume at the target line
```

After the change to ints and pointers, the GOTO function should look something like this for the first call (GOTO INT).

```
;*** GOTO 16 bit integer line number
GOTO      ldx       CHRPTR              ; point to current character
          ldaa      #GOTOP              ; TOKEN for GOTOP function
          staa      ,x                  ; update the token
          ldd       1,x                 ; target line number
          ldx       CURLIN              ; get address of current line
          subd      2,x                 ; subtract current line #
          bge       LE628               ; branch if target >= current
          ldx       TXTTAB              ; point X to beginning of program
LE628     jsr       LE3BB               ; search for the numbered line
          bcs       LE630a              ; line not found
          dex                           ; point X to previous line's terminator
          pshx                          ; save the pointer to the stack
          pula                          ; pull it back into D (A = high byte;
          pulb                          ;  PSHX pushes the low byte first)
; could use XGDX on 6303 instead of stack
          ldx       CHRPTR              ; point to current parse character
          std       1,x                 ; save pointer after GOTOP
          std       CHRPTR              ; set new parser position
LE630     rts                           ; resume at the target line
LE630a    ldx       CHRPTR              ; point to current parse location
          ldaa      #GOTO               ; revert back to GOTO token
          staa      ,x
          ; (then jump to the undefined-line error handler)
```

The code should look like this once the pointer is set.

```
;*** GOTO Pointer
GOTOP     ldx       CHRPTR              ; get current parse address
          ldx       1,x                 ; load the line pointer stored after the token
          stx       CHRPTR              ; set new parser position
          rts                           ; resume at the target line
```

The RTS will have to be replaced with different code since this was specific to the current implementation, but if this code can be moved in front of the main loop it ends up calling, it can just go away.  Three instructions for a GOTO is pretty efficient for an interpreter, and a massive improvement over what it does now.
GOSUB and GOSUBP will need to be a little longer to push the return stuff onto the stack, and GOSUBP might be able to fall through into this.

Edited by JamesD

##### Share on other sites
3 hours ago, Maury Markowitz said:

Which version of MS BASIC is this? QBASIC or later?

I've been disassembling the PC cassette basic.  Well actually the PCJr ROM, but when I want to verify something I run an online 5150 PC emulator (they don't have Jr on their list of systems).  Anyway, PC cassette basic and greater.  No doubt GW-BASIC continued the legacy.

##### Share on other sites

THEN and ELSE would have to deal with the same issues as GOTO and GOSUB.  Forgot those.
RUN linenumber is also a special case

Edited by JamesD

##### Share on other sites
12 hours ago, Maury Markowitz said:

Agreed. Of course the table improves if the LHS is 16-bit, is that what you do on insertion?

Not sure exactly what you mean by LHS, but a simple hash table on 7 bits of the line number with no collision chaining works fine in practice. It only needs to cache branch targets and not all executed lines, so collisions are rare. It also fits nicely in the 256 byte area normally occupied by the argument and operator stacks in Atari BASIC.
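That scheme - a direct-mapped, 128-entry cache indexed by the low 7 bits of the line number, with colliding entries simply overwritten - can be sketched like this (the layout is an illustration, not Altirra Extended BASIC's actual data structure):

```python
# Direct-mapped line-number cache per the description above: hash on
# 7 bits of the line number, no collision chaining, invalidated when
# execution starts or the program is modified.
CACHE_SIZE = 128

class LineCache:
    def __init__(self):
        self.lines = [None] * CACHE_SIZE   # cached line numbers
        self.addrs = [None] * CACHE_SIZE   # corresponding line addresses

    def lookup(self, line_no):
        slot = line_no & 0x7F              # hash = low 7 bits
        if self.lines[slot] == line_no:
            return self.addrs[slot]        # hit: skip the line search
        return None                        # miss: caller does a full search

    def insert(self, line_no, addr):
        slot = line_no & 0x7F
        self.lines[slot] = line_no         # a collision just overwrites
        self.addrs[slot] = addr

    def invalidate(self):                  # on RUN or program edit
        self.lines = [None] * CACHE_SIZE

cache = LineCache()
cache.insert(100, 0x2345)
print(cache.lookup(100))   # hit: returns the cached address
print(cache.lookup(228))   # 228 and 100 share slot 100: collision, miss
```

Since only branch targets are inserted, 128 slots cover most programs with few collisions, and a miss just falls back to the normal line search.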

12 hours ago, Maury Markowitz said:

...but that's because its cached, no? I suspect it would show up a *bit* higher in traces of the original!

It does, but BASIC programs also tend to minimize line number searches because of how expensive they are in the stock Atari BASIC interpreter. Caching the line pointer for FOR..NEXT loops also significantly reduces the line search load.

##### Share on other sites
8 minutes ago, phaeron said:

It does, but BASIC programs also tend to minimize line number searches because of how expensive they are in the stock Atari BASIC interpreter. Caching the line pointer for FOR..NEXT loops also significantly reduces the line search load.

FWIW, MS BASIC keeps the current line number, and it uses that in FOR NEXT loops.
I changed it so it keeps a pointer to the current line, and FOR NEXT loops now use a pointer, and it skips the line search.

##### Share on other sites
On 7/3/2019 at 11:49 PM, phaeron said:

Not sure exactly what you mean by LHS,

Left-Hand Side. But I meant RHS 🙂

Quote

but a simple hash table on 7 bits of the line number with no collision chaining works fine in practice. It only needs to cache branch targets and not all executed lines, so collisions are rare.

Ahhh right, it's a branch. But surely a straight GOTO can jump more than 128?

I think I'm missing something key here, can you expand on this a bit? What is the hash function?

Edited by Maury Markowitz

##### Share on other sites
On 7/4/2019 at 5:59 AM, JamesD said:

FWIW, MS BASIC keeps the current line number, and it uses that in FOR NEXT loops.
I changed it so it keeps a pointer to the current line, and FOR NEXT loops now use a pointer, and it skips the line search.

That is typically a good idea, but not always. Remember that (at least on the Atari) a BASIC program can loop over "program insertions". It is a nice trick to POKE 842,13 to enable "auto-enter", write some code on the screen, including a "CONT", and then "STOP". This can insert code into the running program such that an absolute address on the stack no longer corresponds to the actual return address. Basic++ therefore keeps a generation counter that is incremented each time the program leaves execution mode; the count saved on the stack is compared against it before a cached pointer is trusted.
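The generation-counter safeguard described above can be sketched roughly like this (a hypothetical model, not Basic++'s actual implementation; all names are invented):

```python
# Sketch: cached return pointers are only trusted if no edit could have
# happened since they were pushed. Otherwise we fall back to a line search.

class Interpreter:
    def __init__(self):
        self.generation = 0   # bumped whenever the program may be edited
        self.stack = []       # frames: (return_line_number, cached_pointer, generation)

    def leave_execution_mode(self):
        # e.g. a STOP combined with forced-read tricks that insert lines
        self.generation += 1

    def push_return(self, line_number, pointer):
        self.stack.append((line_number, pointer, self.generation))

    def pop_return(self, find_line):
        """find_line: fallback line-number search, used when the frame is stale."""
        line_number, pointer, gen = self.stack.pop()
        if gen == self.generation:
            return pointer             # fast path: pointer still valid
        return find_line(line_number)  # slow path: program memory may have moved
```

The cost of the check is one comparison per RETURN/NEXT, which buys safety against self-modifying programs without giving up the cached pointer in the common case.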

##### Share on other sites
6 hours ago, Maury Markowitz said:

Left-Hand Side. But I meant RHS 🙂

Ahhh right, it's a branch. But surely a straight GOTO can jump more than 128?

I think I'm missing something key here, can you expand on this a bit? What is the hash function?

It is a function that maps a large set (such as the set of all line numbers) to a smaller set (such as 256 integers) in such a way that the risk of two line numbers receiving the same slot (a "hash collision") is minimized. Then you need some additional method to resolve such collisions, but the complexity of this method is usually not that important as collisions are supposed to be rare.

TurboBasic uses such a method. It keeps a table of 256 bytes that holds the start addresses of program lines. Of course many programs have more than 256 lines, so TurboBasic uses a quite simplistic hash function to derive the slot from the line number: the most significant byte of the line number (a value between 0 and 127), multiplied by 2, gives the slot index.

Thus, if a program searches for line N, it is sufficient to compute floor(N / 256) * 2 to find the nearest recorded line at or below the one the program is looking for, and then perform a linear search from that line on. This means the BASIC interpreter has to search through at most the lines within one 256-number block, instead of performing the search from the start of the program.
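That scheme can be modeled roughly as follows (a sketch, not TurboBasic's actual code: indices into a sorted list of line numbers stand in for the 2-byte addresses TurboBasic stores, and all names are invented):

```python
# One slot per 256-line-number block; each slot records where that block's
# lines begin in the program, so a search can skip everything before it.

def build_start_table(program):
    """program: list of line numbers in ascending order."""
    table = []
    i = 0
    for block in range(128):  # line numbers 0..32767 -> 128 blocks
        while i < len(program) and program[i] < block * 256:
            i += 1
        table.append(i)       # first line in or after this block
    return table

def find_line(program, table, target):
    """Linear search, but starting at the target's block instead of line 0."""
    for i in range(table[target >> 8], len(program)):
        if program[i] == target:
            return i
        if program[i] > target:
            break
    return None
```

The table has to be rebuilt whenever lines are inserted or deleted, which is the price of not keeping per-branch pointers anywhere.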

That is not quite as effective as keeping the absolute address of a line on GOSUB or FOR/NEXT, but has the advantage that it also works for GOTO and GOSUB, and avoids the problem of having to keep the pointers on the stack consistent in case someone inserts or removes lines from the program within a subroutine or a FOR/NEXT loop.

##### Share on other sites
7 hours ago, thorfdbg said:

TurboBasic uses such a method. It keeps a table of 256 bytes that keeps the start addresses of the program lines. Of course there are more than 256 lines in many programs, so TurboBasic uses a quite simplistic "hash function" to derive the hash from the line number: Take the most significant byte of the line (a number between 0 and 127) times 2 defines the slot index.

Thus, if a program searches for line N, it is sufficient to compute floor(N / 256) * 2, to find the nearest line number below the line the program is looking for, and then perform a linear search from that line on. This means that the basic interpreter has to search through at most 255 lines to find the line - instead of performing the search from the start of the program.

This wasn't actually the algorithm I was referring to, but I wasn't aware that this is how Turbo-Basic XL's hash actually worked.

In Altirra Extended BASIC, the line cache is simply a direct-mapped cache based on the low 7 bits of the target line number, with each entry pointing to a valid line. A line lookup uses the low 7 bits to index a cache entry and checks whether the line number is a match. If it is, then the pointer is used and the line lookup terminates quickly. Otherwise, a line search is performed from either the start or the current line, and the match replaces the existing entry in the cache.

The tradeoffs are different -- this method avoids needing to pre-populate the table with starts for each line group and has faster performance for densely packed lines. On the other hand, it is slower if there is aliasing on the lookup. TBXL slows down if there are a lot of lines with branches between lines 0 and 255, whereas ATXBasic slows down if you have common branch targets separated by 128*N.

Even if common branch targets do alias, it'll still help if the aliasing targets aren't used at the same time -- if the program does a bunch of GOTO 1 and then GOTO 129, there will just be a second cache miss to flip the cache entry from line 1 to line 129 and then the second set of GOTOs will be optimized.
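The direct-mapped scheme described above might look roughly like this (an illustrative sketch, not ATXBasic's actual code; the program representation and all names are invented):

```python
# Direct-mapped line cache keyed on the low 7 bits of the target line
# number. A collision simply overwrites the old entry -- no chaining.

class LineCache:
    SLOTS = 128  # low 7 bits of the line number select a slot

    def __init__(self):
        # each slot holds (line_number, line_index) or None
        self.slots = [None] * self.SLOTS

    def lookup(self, line_number):
        """Return the cached line index on a hit, else None."""
        entry = self.slots[line_number & 0x7F]
        if entry is not None and entry[0] == line_number:
            return entry[1]
        return None

    def store(self, line_number, line_index):
        self.slots[line_number & 0x7F] = (line_number, line_index)

def goto(cache, program, target):
    """Resolve a branch target, filling the cache on a miss.
    program: sorted list of (line_number, statement) tuples."""
    hit = cache.lookup(target)
    if hit is not None:
        return hit
    # miss: fall back to a linear search (stock interpreter behavior)
    for i, (num, *_) in enumerate(program):
        if num == target:
            cache.store(target, i)
            return i
    raise KeyError("line %d not found" % target)
```

Note how lines 1 and 129 land in the same slot (both have low 7 bits equal to 1), which is the aliasing case discussed above: alternating branches to both simply evict each other.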

In practice, the direct-mapped cache is effective. Profiles on some BASIC programs show hit rates above 99.9%, the code required to implement it is small, and it works with common tricks like GOTO STICK(0). I could probably implement it in Altirra BASIC were it not for compatibility issues with the memory needed for the hash table, since in that version I don't relocate the evaluation stacks.

##### Share on other sites
9 hours ago, thorfdbg said:

...
That is not quite as effective as keeping the absolute address of a line on GOSUB or FOR/NEXT, but has the advantage that it also works for GOTO and GOSUB, and avoids the problem of having to keep the pointers on the stack consistent in case someone inserts or removes lines from the program within a subroutine or a FOR/NEXT loop.

One nice thing about using pointers is that if you want to push the line number to the stack instead of the pointer, you can.
You'd have to look up the line for every iteration of the FOR loop, though.

##### Share on other sites

An alternative approach, used by the PC versions of MS BASIC and by GW-BASIC: when a statement is entered into program memory, the parsed line number is stored as a "line number" token followed by the 16-bit integer line number. When the interpreter has to look up a line while running, it replaces the token with a "line pointer" token, and the 16-bit integer that follows becomes a pointer to the line instead. Of course, if you edit or save the program, it first has to revert all of those tokens back to line numbers.
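A toy model of that token-swap scheme (token values, names, and the byte layout are invented for illustration; the real interpreters patch tokenized program memory directly):

```python
# A branch target is a token byte followed by a 16-bit little-endian value.
# The first time the branch is taken, the token is patched in place and the
# value becomes the target's offset in program memory.

TOK_LINENUM = 0x0E   # hypothetical token values
TOK_LINEPTR = 0x0D

def resolve_target(code, pos, line_offsets):
    """code: bytearray of tokenized program; pos: index of the token byte.
    line_offsets: dict mapping line number -> offset of that line."""
    value = code[pos + 1] | (code[pos + 2] << 8)
    if code[pos] == TOK_LINEPTR:
        return value                 # already patched: value is the offset
    offset = line_offsets[value]     # the slow line search, done only once
    code[pos] = TOK_LINEPTR          # patch token in place
    code[pos + 1] = offset & 0xFF
    code[pos + 2] = (offset >> 8) & 0xFF
    return offset

def unpatch(code, pos, offset_lines):
    """Revert a pointer back to a line number before LIST/SAVE.
    offset_lines: dict mapping offset -> line number."""
    if code[pos] == TOK_LINEPTR:
        off = code[pos + 1] | (code[pos + 2] << 8)
        num = offset_lines[off]
        code[pos] = TOK_LINENUM
        code[pos + 1] = num & 0xFF
        code[pos + 2] = (num >> 8) & 0xFF
```

Both encodings are the same size, so the patch never moves program text; the cost is the revert pass before editing or saving.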
