
Atari BASIC vs. Altirra 1.55 vs. TBXL 1.5 vs. FastBASIC 3.5



I was toying around with a program that had a couple of nested loops, so I decided to test execution times of a very simple nested FOR/NEXT loop. The program checks execution time of the following: FOR LP1=1 TO 10:FOR LP2=1 TO 10000:NEXT LP2:NEXT LP1. The Atari BASIC, Altirra BASIC, and TBXL versions all use exactly the same code, with the TBXL version compiled. The FastBASIC program is as close as possible, although compiled by necessity. I will follow up soon with most flavors of Atari BASIC (Advan, BASIC XE, compiled Atari BASIC, etc.) side by side.

 

FastBASIC wrecks the other three in this simple competition.

 


Would have liked to see Atari Microsoft BASIC in that shootout...

The FP/integer thing has always been a point of contention; Atari should have made that switchable, or at least divided its use in some workable way.

 

Gotta love FastBasic! He's working so hard on it!

Edited by _The Doctor__

Haven't we had this type of benchmark before? Actually, it is not much of a benchmark, because it tests nothing but a single instruction. That said, you can also try Basic++ if you want.

 

I'm afraid there is not enough ROM space for Atari BASIC to support integer variables. The ROM is quite tight as it is, and Basic++ was already a tight squeeze.


Yes, I know it isn't much of a benchmark, and it is integers only for FB, except for the timekeeping. The laptop used only has a 1366x768 screen, and 7 or 8 Altirras running at once was kind of cramped :)

 

I do plan on doing most Atari BASICs side by side on another laptop. I also have a much more involved benchmark to run one of these days.

 

Regardless, FastBASIC is very fast.

 


Compilers should always beat interpreters since the program can skip all the parsing.

 

Interestingly, this is not quite the case. First, the parsing step for Atari Basic is done while entering the program, not while executing it. Microsoft BASIC is much worse in this respect as it does parts of its syntax analysis during execution, but Atari Basic does not do that. There is no syntax check during execution.

 

Second, there are interesting counter-examples. The ABC-compiler is not really a compiler at all. It just re-tokenizes the Atari-Basic source code to its own token set, and then executes (or rather "interprets") this token set on its own p-code machine. The only reason why it is faster is that it is integer-based, with one variable being 3 bytes large (i.e. integers up to +/-2^23 are supported).

 

Then, if you look at a compiled Turbo-Basic source, it is just a sequence of "JSR" to the corresponding library functions (mostly). That is not that much different from what happens at interpretation stage, except that there is a loop around it which just picks tokens from the source and then dispatches into a jump-table where each entry corresponds to a basic statement. So what goes away is this "dispatcher", but that is only a relatively small part of the story.

 

What would help is to have a smarter optimizer that detects common patterns and replaces them, e.g. A=A+1 could be replaced by a simple INC if the compiler can prove that A is always an integer and in range of 0 to 255. Or an empty for-loop could be optimized to nothing (since it doesn't do anything).
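The kind of pattern replacement described above can be sketched as a tiny peephole pass. The token names and patterns here are invented for illustration; no real 8-bit compiler used this exact representation:

```python
# Minimal peephole-optimizer sketch over a hypothetical token stream.
# Token names (LOAD, PUSH1, ADD, STORE, INC) are invented for illustration.

PATTERNS = [
    # A = A + 1  ->  INC A  (valid only if A is provably an 8-bit integer)
    (("LOAD", "PUSH1", "ADD", "STORE"), ("INC",)),
]

def peephole(tokens):
    """Scan the token list and replace known patterns with shorter ones."""
    out, i = [], 0
    while i < len(tokens):
        for pat, repl in PATTERNS:
            if tuple(tokens[i:i + len(pat)]) == pat:
                out.extend(repl)
                i += len(pat)
                break
        else:
            out.append(tokens[i])
            i += 1
    return out

print(peephole(["LOAD", "PUSH1", "ADD", "STORE", "PRINT"]))
# -> ['INC', 'PRINT']
```

Even a handful of such rules covers a surprising fraction of typical BASIC hot loops.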

 

Unfortunately, the compilers back then were not that smart. Modern compilers are.

 


 


Well, I haven't looked at the internals of Atari BASIC, but in my experience from optimizing the interpreter on the MC-10 (a Microsoft BASIC), the interpreter has to scan for keyword tokens, then parse the parameters to the related functions, convert numeric constants, etc... all while the program is running.

 

Edited by JamesD


 

Atari Basic really works differently, in particular differently from Microsoft Basic, which was the basis for many contemporary implementations, for example the C64 Basic. Atari Basic runs a "pre-compiler" that converts everything into tokens once you enter a line, and then executes only from the tokens. Microsoft Basic is, as said, a different horse.

 

You can find a good discussion of Atari Basic here:

 

https://archive.org/details/ataribooks-the-atari-basic-source-book

 

which is really worth reading, and I can highly recommend it, especially the "Pre-Compiler" section which explains how the tokenizer works.

 

Atari Basic could be a very fast Basic interpreter, but it suffers from two flaws: first, its stack stores only line numbers and statement offsets, requiring a linear scan over the lines to restore an execution position. Second, the lousy math implementation, which is unnecessarily slow.

 

The first problem is fixed in TurboBasic (by using a "hash list" of line numbers) or Basic++ (by pushing in addition absolute addresses and a sequence number on the stack).
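The difference can be sketched: a linear scan over the stored lines versus a prebuilt line-number lookup table, which is roughly what a hash list buys you. The program representation below is invented for illustration:

```python
# Sketch of why restoring a position from a line number is slow in
# Atari BASIC: the interpreter walks the stored program from the top.
# A prebuilt dict of line number -> index makes the lookup O(1).

program = [(10, "FOR I=1 TO 10"), (20, "GOSUB 100"), (30, "NEXT I"),
           (100, "REM SUB"), (110, "RETURN")]

def find_linear(lineno):
    """Atari BASIC style: scan every stored line until the number matches."""
    for idx, (num, _) in enumerate(program):
        if num == lineno:
            return idx
    raise ValueError("line not found")

line_index = {num: idx for idx, (num, _) in enumerate(program)}  # built once

print(find_linear(100), line_index[100])  # both -> 3
```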

 

The second is not really fixable without incompatibilities - one would have to switch to a different (binary instead of BCD) math model, but that makes programs incompatible. TurboBasic works around this by unrolling many loops; Os++ uses some manual optimizations but, due to limited ROM space, cannot be quite as efficient.


Atari Basic really works differently, in particular differently from Microsoft Basic, which was the basis for many contemporary implementations, for example the C64 Basic. Atari Basic runs a "pre-compiler" that converts everything into tokens once you enter a line, and then executes only from the tokens. Microsoft Basic is, as said, a different horse.

 

You can find a good discussion of Atari Basic here:

 

https://archive.org/details/ataribooks-the-atari-basic-source-book

 

which is really worth reading, and I can highly recommend it, especially the "Pre-Compiler" section which explains how the tokenizer works.

...

I just looked at the book. It calls the tokenizer a "pre-compiler"... which is exactly what Microsoft BASIC has done from day one.

 

Atari Basic could be a very fast Basic interpreter, but it suffers from two flaws: first, its stack stores only line numbers and statement offsets, requiring a linear scan over the lines to restore an execution position. Second, the lousy math implementation, which is unnecessarily slow.

 

The first problem is fixed in TurboBasic (by using a "hash list" of line numbers) or Basic++ (by pushing in addition absolute addresses and a sequence number on the stack).

 

The second is not really fixable without incompatibilities - one would have to switch to a different (binary instead of BCD) math model, but that makes programs incompatible. TurboBasic works around this by unrolling many loops; Os++ uses some manual optimizations but, due to limited ROM space, cannot be quite as efficient.

Before you suggest switching to floating point, I suggest you look at how floating point implements multiply, and divide.

 


Hi!

 

I just looked at the book. It calls the tokenizer a "pre-compiler"... which is exactly what Microsoft BASIC has done from day one.

The difference between Atari BASIC and MS BASIC is in how far the pre-compiler processes the source. In Atari BASIC, in addition to tokenizing the statements, it:

- parses all numbers and stores them as floating-point values,

- searches for (and adds) variables in the variable name table and stores only the variable number,

- parses operators and stores tokens, differentiating between symbols that have multiple uses, i.e., the "=" for assignment, the one comparing floating-point numbers, and the one comparing strings are all stored as different tokens,

- parses string constants and stores them as a length followed by the contents.

 

All of the above means that if you enter "10 PR.001+ MYVAR", it is listed as "10 PRINT 1+MYVAR".

 

During execution, the interpreter still has to employ the shunting-yard algorithm to evaluate expressions, as the tokens are sorted in "display order", but the interpreter assumes that all operations are already valid so it does not need to detect type errors.
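For illustration, a minimal shunting-yard pass over tokens stored in display order might look like this. It is simplified (no parentheses, no functions), and real Atari BASIC interleaves the conversion with evaluation rather than producing a postfix list:

```python
# Shunting-yard sketch: converting infix tokens ("display order", as the
# Atari BASIC tokenizer stores them) to postfix for stack evaluation.

PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_postfix(tokens):
    out, ops = [], []
    for tok in tokens:
        if tok in PREC:
            # pop operators of equal or higher precedence first
            while ops and PREC[ops[-1]] >= PREC[tok]:
                out.append(ops.pop())
            ops.append(tok)
        else:                      # operand: number or variable
            out.append(tok)
    while ops:                     # drain remaining operators
        out.append(ops.pop())
    return out

print(to_postfix(["1", "+", "MYVAR", "*", "2"]))
# -> ['1', 'MYVAR', '2', '*', '+']
```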

 

Before you suggest switching to floating point, I suggest you look at how floating point implements multiply, and divide.

???

 

One of the big slow-downs in the original Atari BASIC is the implementation of floating-point multiplication and division, using repeated addition and subtraction, so multiplying "999999999" by another number is super-slow ( 999999999*111111111 takes 16ms, 111111111*999999999 takes 4ms, 1*999999999 takes 2.8ms, addition takes about 1ms).
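The asymmetry follows directly from a repeated-addition cost model: the number of additions is the digit sum of the multiplier. A quick sketch of that model (a simplification that ignores shifts and normalization):

```python
# Why 999999999*X is much slower than 111111111*X under repeated-addition
# BCD multiplication: each digit of the multiplier costs as many additions
# as the digit's value, so the total is the multiplier's digit sum.

def addition_count(multiplier):
    """Additions a naive repeated-addition BCD multiply would perform."""
    return sum(int(d) for d in str(multiplier))

print(addition_count(999999999))  # -> 81 additions
print(addition_count(111111111))  # -> 9 additions
print(addition_count(1))          # -> 1 addition
```

The 81-vs-9 ratio lines up with the roughly 4x timing difference once fixed overhead is accounted for.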

 

Have fun,



 

Indeed. The BCD multiply is just repeated addition, which is quite a lousy choice. At least, one could keep some pre-multiplied partial results, and so does TurboBasic (i.e. pre-multiplied results by *2, *4 and *8, for example). Division in BCD is even harder to optimize, and it is here just repeated subtraction. The "bit by bit" binary division is faster as it generates one output bit per loop, unlike the rather lengthy BCD version.
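The binary "one output bit per loop" scheme is classic restoring division; a sketch for unsigned integers (not the math pack's actual code):

```python
# Restoring division sketch: one quotient bit per iteration, using only
# shifts, compares and subtractions.

def divide(dividend, divisor, bits=16):
    """Unsigned integer restoring division for values fitting in `bits`."""
    assert divisor != 0
    quotient, remainder = 0, 0
    for i in range(bits - 1, -1, -1):
        remainder = (remainder << 1) | ((dividend >> i) & 1)  # bring down a bit
        quotient <<= 1
        if remainder >= divisor:       # trial subtraction succeeds
            remainder -= divisor
            quotient |= 1
    return quotient, remainder

print(divide(1000, 7))  # -> (142, 6)
```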


Hi!

 

 

The difference between Atari BASIC and MS BASIC is in how far the pre-compiler processes the source. In Atari BASIC, in addition to tokenizing the statements, it:

- parses all numbers and stores them as floating-point values,

- searches for (and adds) variables in the variable name table and stores only the variable number,

- parses operators and stores tokens, differentiating between symbols that have multiple uses, i.e., the "=" for assignment, the one comparing floating-point numbers, and the one comparing strings are all stored as different tokens,

- parses string constants and stores them as a length followed by the contents.

 

All of the above means that if you enter "10 PR.001+ MYVAR", it is listed as "10 PRINT 1+MYVAR".

 

During execution, the interpreter still has to employ the shunting-yard algorithm to evaluate expressions, as the tokens are sorted in "display order", but the interpreter assumes that all operations are already valid so it does not need to detect type errors.

Sounds like they did a few things I planned to add to the version of BASIC I'm working with.

I sped up the conversion from ASCII to float, but it really needs to be done during tokenization, along with converting all other constants such as line numbers.

Variables should also be replaced with an index or pointer... but that's a ways off with what I'm working on.

 

???

 

One of the big slow-downs in the original Atari BASIC is the implementation of floating-point multiplication and division, using repeated addition and subtraction, so multiplying "999999999" by another number is super-slow ( 999999999*111111111 takes 16ms, 111111111*999999999 takes 4ms, 1*999999999 takes 2.8ms, addition takes about 1ms).

Multiplication/division works pretty similarly with BCD and floating point math.

Microsoft's floating-point math uses lots of adds/subtracts and bit shifts.


Multiplication/division works pretty similarly with BCD and floating point math.

Microsoft's floating-point math uses lots of adds/subtracts and bit shifts.

Errr... several things go upside down here. First of all, the alternative to BCD is not floating point, but binary. The alternative to floating point is fixed point, or integer. The ABC compiler uses integers only, and if you want fixed point, you have to scale yourself.

Floating point, of course, requires as its building blocks an integer multiplication and division, namely to manipulate the mantissas of the numbers.

 

 

Then, if we compare binary and BCD integers, there is quite a relevant difference as far as multiplication or division is concerned. A binary multiplication algorithm takes out one bit at a time and, if it is set, adds a copy of the multiplicand to the result, then shifts the two. So it generates one digit per loop, with at most a single addition performed per loop, plus two shifts.

 

For BCD, the story is different. The loop is not over bits, but over nibbles, and once you have taken out a single nibble, it requires up to nine additions of the multiplicand to the mantissa (at least with the MathPack implementation, which is rather primitive). So it is in general more complex than its corresponding binary counterpart, and noticeably slower.
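For comparison, the binary shift-and-add multiply needs at most one addition per multiplier bit. A sketch for unsigned integers:

```python
# Binary shift-and-add multiplication: per multiplier bit, at most one
# addition plus two shifts, versus up to nine additions per BCD digit.

def multiply(a, b):
    """Unsigned shift-and-add multiplication."""
    result = 0
    while b:
        if b & 1:          # low bit set: add the (shifted) multiplicand
            result += a
        a <<= 1            # shift multiplicand left
        b >>= 1            # consume one multiplier bit
    return result

print(multiply(123, 45))  # -> 5535
```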

 

They probably picked BCD back then because it is easier to convert to and from ASCII, but that is about as far as the advantage goes. The average rounding error of a BCD implementation to base 100 (which is what the math pack uses) is considerably higher than that of a binary implementation - simply because it is easier to "lose digits" in rounding, as the exponent can only be given relative to a very coarse base (approximately 6 bits instead of 1 bit).


Errr... several things go upside down here. First of all, an alternative to BCD is not floating point, but binary. An alternative to floating point is fixed point, or integer. The ABC compiler uses integers only, and if you want

<snip>

 

You are oversimplifying the amount of work required for floating point.

There are two shifts, one add/subtract per pass, 8 passes, times 4 bytes for the mantissa in Microsoft BASIC, and you have to deal with the carry between bytes, so another add.

This has to be repeated 4 times to multiply by each byte of the mantissa of the multiplier, and you have to add the results three times.

In addition to that, you have to deal with the sign, deal with different exponents, and normalize the result which requires additional bit shifting.

Since numbers are normally stored in packed form (8 bit exponent, 1 bit sign, 31 bits mantissa), you have to unpack them before performing the math (1 byte sign, 1 byte exponent, 4 bytes mantissa), and repack them after the math is complete. Is that efficient? For memory, yes, for speed no. But that's how Microsoft does it.
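The unpack/repack overhead can be illustrated with a Microsoft-style 5-byte layout, where the sign bit overlays the mantissa's implicit leading 1. Details differ between ROMs; this is a sketch of the idea, not any specific implementation:

```python
# Pack/unpack sketch of a Microsoft-style 5-byte float: one exponent byte,
# then a sign bit stored in place of the mantissa's implicit leading 1.
# Illustrates the unpack-compute-repack overhead only.

def unpack(b):
    """bytes(5) -> (sign, exponent, 32-bit mantissa with explicit top bit)."""
    exponent = b[0]
    sign = (b[1] >> 7) & 1
    mantissa = ((b[1] | 0x80) << 24) | (b[2] << 16) | (b[3] << 8) | b[4]
    return sign, exponent, mantissa

def pack(sign, exponent, mantissa):
    """Inverse of unpack: overlay the sign on the mantissa's top bit."""
    top = ((mantissa >> 24) & 0x7F) | (sign << 7)
    return bytes([exponent, top, (mantissa >> 16) & 0xFF,
                  (mantissa >> 8) & 0xFF, mantissa & 0xFF])

n = bytes([0x84, 0x20, 0x00, 0x00, 0x00])   # hypothetical encoding
print(pack(*unpack(n)) == n)                 # -> True, round-trips
```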

 

Bottom line, whether it's BCD or floating point, it requires a lot of work, and I think we agree that fixed point or integers would be preferable where possible.

But fixed point and integers break compatibility with Atari BASIC (and in most cases, Microsoft BASIC). These BASICs weren't written just for games.

If you care about compatibility, you are stuck with at least some BCD or floating point support.

If you want to extend the language with integers, or fixed point, that's fine, but like the ABC compiler, not everything will be compatible.

 

The biggest problem with any of the P-Code compilers I've looked at, is that they don't optimize the code.

Even a simple peephole optimizer should speed programs up noticeably.


 

Edited by JamesD

You are oversimplifying the amount of work required for floating point.

There are two shifts, one add/subtract per pass, 8 passes, times 4 bytes for the mantissa in Microsoft BASIC, and you have to deal with the carry between bytes, so another add.

This has to be repeated 4 times to multiply by each byte of the mantissa of the multiplier, and you have to add the results three times.

In addition to that, you have to deal with the sign, deal with different exponents, and normalize the result which requires additional bit shifting.

 

But this goes for any floating-point base. It does so for base 100 as used by the math pack, and for base 2. Of course you have to compare signs, adjust the mantissa, and then either add or subtract, but the core of the algorithm is much simpler in binary than it is in decimal.

 

Since numbers are normally stored in packed form (8 bit exponent, 1 bit sign, 31 bits mantissa), you have to unpack them before performing the math (1 byte sign, 1 byte exponent, 4 bytes mantissa), and repack them after the math is complete. Is that efficient? For memory, yes, for speed no. But that's how Microsoft does it.

 

I do not know which format Microsoft Basic uses, but it is certainly an advantage to store exponent and sign in one byte, and the mantissa in the remaining bytes. This is how the Motorola "fast floating point" format works.

 

Bottom line, whether it's BCD or floating point, it requires a lot of work, and I think we agree that fixed point or integers would be preferable where possible.

 

Certainly, though this would have required a separate flag for integer/floating point, which we do not have.

 

But fixed point and integers break compatibility with Atari BASIC (and in most cases, Microsoft BASIC). These BASICs weren't written just for games.

If you care about compatibility, you are stuck with at least some BCD or floating point support.

If you want to extend the language with integers, or fixed point, that's fine, but like the ABC compiler, not everything will be compatible.

 

The biggest problem with any of the P-Code compilers I've looked at, is that they don't optimize the code.

Even a simple peephole optimizer should speed programs up noticeably.

 

True enough. The ABC "compiler" is just a simple translator from the Atari BASIC tokens to the p-code of the ABC interpreter, plus a change from in-order to post-order to make it work with the stack-based p-code interpreter. Everything else was probably too complex for the machine. The TurboBasic compiler handles a couple of simple peephole optimizations, but it is also bug-ridden.

But this goes for any floating-point base. It does so for base 100 as used by the math pack, and for base 2. Of course you have to compare signs, adjust the mantissa, and then either add or subtract, but the core of the algorithm is much simpler in binary than it is in decimal.

 

I do not know which format Microsoft Basic uses, but it is certainly an advantage to store exponent and sign in one byte, and the mantissa in the remaining bytes. This is how the Motorola "fast floating point" format works.

 

Certainly, though this would have required a separate flag for integer/floating point, which we do not have.

 

True enough. The ABC "compiler" is just a simple translator from the Atari BASIC tokens to the p-code of the ABC interpreter, plus a change from in-order to post-order to make it work with the stack-based p-code interpreter. Everything else was probably too complex for the machine. The TurboBasic compiler handles a couple of simple peephole optimizations, but it is also bug-ridden.

Well, the advantage of BCD was never speed; it was guaranteed accurate representation of numbers for financial info.

Floating point libraries had already been published for the 6502 before the Atari came out, so you have to wonder if size, or Atari management wanting a business machine influenced the choice.

 

Floating point definitely works better with a hardware multiply instruction... which turns it into 16 8-bit multiplies (160 clock cycles on the 6803, 176 on the 6809), and a bunch of additions. *edit* (112 cycles on the 6303, 160 on the 6309)

When combined with changing some divides by constants to multiplies by the inverse (1/constant), it makes for a much faster math library.
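The reciprocal trick itself is tiny; the point is that the slow divide happens once, up front, and each later divide becomes a multiply (a sketch; note the last bit of the result can differ from a true divide):

```python
# Divide-by-constant replaced by multiply-by-reciprocal: the division is
# paid once when the inverse is precomputed, then every "divide" is a
# (much cheaper) multiply.

DIV = 7.0
INV = 1.0 / DIV        # precomputed once

x = 123.0
print(x / DIV)         # division every time
print(x * INV)         # one multiply instead; may differ in the last bit
```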

The Motorola floating point library didn't take advantage of the hardware multiply btw. At least not the version I found.

 

As far as the floating point format goes, it's been a year since I messed with that, maybe I got it backwards, but I'm not sure that would require packing and unpacking the number.

Putting the sign bit with the exponent would be faster, but it cuts the range of numbers in half, whereas putting it with the mantissa drops the precision by the least significant bit.

 

My eventual goal was to add a token/value to identify the numeric format used for a number to speed up parsing. Integer, float, or a variable.

Yeah, it would break compatibility, but it's not like the MC-10 had a giant software library, and that is to be a later release anyway.

Edited by JamesD

Well, the advantage of BCD was never speed; it was guaranteed accurate representation of numbers for financial info.

 

Well, in the sense of "conversion to a human-readable representation is lossless". In terms of accuracy, binary is much better. But it would be hard to explain to people that your account *is* actually kept precise, whereas your account statement on paper may be less precise...

 

Floating point libraries had already been published for the 6502 before the Atari came out, so you have to wonder if size, or Atari management wanting a business machine influenced the choice.

 

Actually, I don't think that Atari was even involved when this decision was made - it was made at SMI, not Atari. It is hard to tell what the motivation for this choice actually was. Precision, certainly not. Probably lack of knowledge...

 

 

Floating point definitely works better with a hardware multiply instruction... which turns it into 16 8-bit multiplies (160 clock cycles on the 6803, 176 on the 6809), and a bunch of additions. *edit* (112 cycles on the 6303, 160 on the 6309)

When combined with changing some divides by constants to multiplies by the inverse (1/constant), it makes for a much faster math library.

The Motorola floating point library didn't take advantage of the hardware multiply btw. At least not the version I found.

 

I don't know whether Motorola ever published a floating-point package for the 8-bit, but they did for the 68K, and their fast floating point works as described: 4 bytes per number, one byte for exponent and sign, 3 bytes mantissa. Not ideal in terms of range, but as quick as you could make it back then. The "multiply by inverse" trick is certainly nice, but it only works if you have to divide by constants, or rather, if you have to divide a huge array of numbers by the same number. Not saying that such applications do not exist - they certainly do. But it is not a silver bullet.

Well, in the sense of "conversion to a human-readable representation is lossless". In terms of accuracy, binary is much better. But it would be hard to explain to people that your account *is* actually kept precise, whereas your account statement on paper may be less precise...

...

I don't know whether Motorola ever published a floating-point package for the 8-bit, but they did for the 68K, and their fast floating point works as described: 4 bytes per number, one byte for exponent and sign, 3 bytes mantissa. Not ideal in terms of range, but as quick as you could make it back then. The "multiply by inverse" trick is certainly nice, but it only works if you have to divide by constants, or rather, if you have to divide a huge array of numbers by the same number. Not saying that such applications do not exist - they certainly do. But it is not a silver bullet.

Some numbers cannot be represented exactly with floating point, but can be using BCD. 0.2 for example.
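Python's decimal module makes this easy to demonstrate, with decimal arithmetic standing in for BCD:

```python
# 0.2 is exact in decimal/BCD but not in binary floating point.
from decimal import Decimal

print(0.1 + 0.2 == 0.3)                                   # -> False (binary FP)
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # -> True (decimal)
```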

Fixed point math and floating point math are commonly used instead these days, but remember, they use at least double precision instead of single precision (or less) like was used on these old machines.

 

The 6502 floating point routines were first published in the August 1976 issue of BYTE, with some corrections in another issue later that year.

 

Motorola originally sold a math ROM for their 8 bit CPUs, and the source was eventually posted on their BBS.

 

The multiplication by an inverse can be used for a lot of things.

Off the top of my head, I've used it to speed up the ROM ASCII to float conversion, SIN, COS, fourier transforms, and mandelbrot generation... among other things.

No it doesn't always work, which is why I said "some divides", but the 6502 doesn't even have a hardware multiply so I guess it's a moot point anyway.

 

 


Some numbers cannot be represented exactly with floating point, but can be using BCD. 0.2 for example.

 

Actually, almost all numbers cannot be represented exactly in any number format. (-: From this perspective, a number format to base 12 or 24 might have been more beneficial.

Though this is not quite the point: given finite-precision arithmetic, a reasonable floating-point implementation should minimize the rounding loss, and should have the property that the result of the (finite-precision) operation is identical to the result obtained in infinite precision, correctly rounded to the precision of the representation. None of this holds for the BCD implementation SMI delivered, while IEEE math does all of it. It requires a bit more care, of course. And, for historical correctness, the Atari Mathpack is older than the IEEE format.

 

Fixed point math and floating point math are commonly used instead these days, but remember, they use at least double precision instead of single precision (or less) like was used on these old machines.

 

Single precision = four-byte numbers. Atari BCD = six-byte numbers. If you ask me, a lousy (lossy?) compromise.

 

The multiplication by an inverse can be used for a lot of things.

Off the top of my head, I've used it to speed up the ROM ASCII to float conversion, SIN, COS, fourier transforms, and mandelbrot generation... among other things.

 

I don't see where you need a division in any of these algorithms. SIN and COS are evaluated using a polynomial approximation, which is a standard method. The polynomials are evaluated by the Horner scheme, which is fine. Mandelbrot requires only addition, subtraction and multiplication. There are two algorithms where a division is needed: one is the log, using a reflection method, and the other is the (rather silly) implementation of sqrt() in Atari Basic, which could be done much faster and more smoothly with a digit-by-digit square-root function.
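The digit-by-digit square root produces one result bit per loop using only shifts, adds and compares; an integer sketch:

```python
# Binary digit-by-digit integer square root: one result bit per iteration,
# no multiplication or division needed.

def isqrt(n):
    """Integer square root by the binary digit-by-digit method."""
    root, bit = 0, 1 << 30         # start at the highest power of 4 we use
    while bit > n:                 # lower it to the largest power of 4 <= n
        bit >>= 2
    while bit:
        if n >= root + bit:
            n -= root + bit
            root = (root >> 1) + bit
        else:
            root >>= 1
        bit >>= 2
    return root

print(isqrt(144), isqrt(150))  # -> 12 12
```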

 

 

No it doesn't always work, which is why I said "some divides", but the 6502 doesn't even have a hardware multiply so I guess it's a moot point anyway.

 

Not really. A software divide is still slower than a software multiplication.

Actually, almost all numbers cannot be represented exactly in any number format. (-: From this perspective, a number format to base 12 or 24 might have been more beneficial.

Though this is not quite the point: given finite-precision arithmetic, a reasonable floating-point implementation should minimize the rounding loss, and should have the property that the result of the (finite-precision) operation is identical to the result obtained in infinite precision, correctly rounded to the precision of the representation. None of this holds for the BCD implementation SMI delivered, while IEEE math does all of it. It requires a bit more care, of course. And, for historical correctness, the Atari Mathpack is older than the IEEE format.

Yeah, but 0.2 is easily represented in BCD without rounding or loss, which is why I chose it.

It's a perfect example of why they used BCD for financial applications.

BCD is still lossy for calculations requiring a lot of digits, but at least the loss doesn't start in the first decimal place.

Would a floating-point number, even single precision, be acceptable?

If you are talking about tens of thousands of financial transactions per day, that type of error adds up in a hurry.

 

Single precision = four-byte numbers. Atari BCD = six-byte numbers. If you ask me, a lousy (lossy?) compromise.

My single precision comment was a bit misleading, as some of Microsoft's BASICs use 5 bytes for added precision without the horribly slow double precision.

The 6502 source code looks like it might have been designed so that they could add as many bytes of precision as they want, though the code and constants included are for 4 or 5 bytes.

The CoCo and MC-10 use 5 bytes, plus an extra byte during multiply and divide, but several versions only use 4.

It's one of the reasons why some Microsoft BASIC machines are faster than others with the same CPU and MHz.

 

I don't see where you need a division in any of these algorithms. SIN and COS are evaluated using a polynomial approximation, which is a standard method. The polynomials are evaluated by the Horner scheme, which is fine. Mandelbrot requires only addition, subtraction and multiplication. There are two algorithms where a division is needed: one is the log, using a reflection method, and the other is the (rather silly) implementation of sqrt() in Atari Basic, which could be done much faster and more smoothly with a digit-by-digit square-root function.

I just sped up the ROM; you'll have to ask Microsoft why they did it that way.

I fixed SIN, then found that other functions used its code and were also sped up as a result.

 

Keep in mind that what might seem silly now, might not have seemed so silly back then.

 

The mandelbrot code I ported has a divide. Shoot the original author.

 

Not really. A software divide is still slower than a software multiplication.

Well, I guess Monte Davidoff (he wrote Microsoft's original floating-point library) wasn't aware of that when he wrote the first version.

The code was probably just duplicated after that.


  • 4 years later...

So on my Atari 1200XL (256K Rambo / UAV are the only mods) I ran an OSS BASIC XE (cart) vs. Altirra BASIC (ATR) comparison using the AHL benchmark test.
OSS BASIC XE - 49.45 seconds
Altirra BASIC 1.55 - 296.52 seconds
Why is Altirra BASIC 6x slower? I thought Altirra BASIC was a VERY FAST BASIC.


17 minutes ago, Ricky Spanish said:

So on my Atari 1200XL (256K Rambo / UAV are the only mods) I ran an OSS BASIC XE (cart) vs. Altirra BASIC (ATR) comparison using the AHL benchmark test.
OSS BASIC XE - 49.45 seconds
Altirra BASIC 1.55 - 296.52 seconds
Why is Altirra BASIC 6x slower? I thought Altirra BASIC was a VERY FAST BASIC.

Altirra BASIC still has to use the math pack built into the OS for math operations. BASIC XE can afford to use a faster math pack as it has a lot more ROM space + a disk-based extension.

