
Why did FP use BCD?



Given that there were already a number of publicly available FP libraries using 32 and 40-bit binary representations, why the heck did SMI/OSS write their own using BCD?

 

For instance, Woz had already published a complete FP package in 1976 in Dr. Dobb's, and Atari had already licensed MS 8K BASIC with its 32-bit code (I'm assuming it wasn't the 40-bit code in the 8K version).

 

They suggest they did this to avoid rounding and presentation problems, but those are minor advantages at best. It seems like they went to a lot of effort to make a system that offered very little upside and a whole lot of downside, both in time to complete the contract and in performance. However...

 

* has anyone compared the binary size of the original Atari code to Woz's or MS 8K? I would suspect the BCD version would be larger, since the other two were written for extremely memory-constrained systems.

 

* given the performance of TURBO-BASIC XL's math package, is BCD FP code *inherently* slower on the 6502? IIRC, Turbo beat the Apple by well over the 2x that the clock speed difference alone would suggest on the sieve and Ahl's benchmarks.


Lots of bits and pieces to the story, and there is no one, single short answer that I've ever seen in one place. If you haven't, start with these links. Make time as well for that last one - Bill has passed now, I believe, and that interview is just wonderful.

 

https://www.laughton.com/paul/abps/oss/two_births.html

 

https://www.atarimagazines.com/rom/issue7/interview.php

 

https://www.atarimagazines.com/compute/issue57/insight_atari.html

 

https://ataripodcast.libsyn.com/antic-interview-7-the-atari-8-bit-podcast-bill-wilkinson-oss


On 6/21/2019 at 7:24 PM, towmater said:

avoiding rounding errors seems like a perfectly valid explanation, and is reason enough by itself.

Really? The resulting code is much slower, and I suspect people would choose performance over the relatively non-invasive rounding error every time. It's not like the code didn't still round off values during calculations; it only avoided a single type of problem.

 

Quote

Make time as well for that last one - Bill has passed now

 

Yeah, I "interviewed" him in email some time ago now. I guess the choice of BCD will remain a mystery.


I suspect that the reasons weren't entirely technical. IIRC, Atari initially asked SMI to port MS Basic, which would probably have meant using the MS FP routines. But they rejected the idea and offered to write a new Basic from scratch. Sounds like there were some commercial motivations for the whole approach.

 

Edited by ijor

Everything indicates that the most likely reasons were:

 

  1. Complete disregard for publication-driven performance reviews and comparisons with other equipment.
  2. The nature and intent of Atari Basic itself: an all-round simple and compact package meant to easily tap into the machine's unique traits and benefits, where traditional programming also met color graphics, sound and input devices.
  3. Adherence to fitting in 8K of linear space, so that free memory would be enough to develop games and starter applications. Anything less and you end up like the IndusGT Diagnostics, which runs on Microsoft Basic I and crashes constantly due to RAM exhaustion...

 

That would be all, I think...

Edited by Faicuai

All of the above, I think. Remember that the original design goals for our computers were only 4K and 8K for the 400 and 800. It would have made sense to do line numbers as integers instead of FP, but I suspect that in the trade-off of a couple of bytes versus speed, bytes won. It wasn't like they anticipated 48K Atari 400 computers running a BBS or databases.

 

Bill was also not officially a computer guy by education. He was a math major at Berkeley IIRC. Different sensibilities. That he would choose accuracy over speed isn't a surprise.

 

Besides, BASIC just wasn't that popular at the time. Most serious programming was still done in machine language, coming off the wild success of the VCS. I remember we bought a DEC mainframe around 1978; vague memories, but I think we spent ~$500k-$750k, and the IT guys refused to get BASIC for it as a $7000 option. They specifically said 1) BASIC was not a real programming language, and 2) we have no intention of letting anyone use the computer other than from within the IT group, so making it accessible to others is not in the plan.

 

BASIC really became something it was not intended to be. In hindsight it would have been better to tweak the language from the start, but with its popularity we got everything from OSS BASIC XL to TurboBASIC, and people are still tweaking it literally to this day. Kind of a nice testament to the language that competitors like FORTRAN have fallen into disuse while structured and compiled BASICs are still going strong 40 years later.


I wonder if BCD FP arithmetic is really smaller than binary FP arithmetic, though? I guess the 6502 doesn't have a multiply or divide instruction, so it's all shift and add/subtract anyway. Yeah, it has decimal mode, but the code to do it in binary will be fundamentally the same. Add and subtract will still require the same type of shift to align either way. So I'm not sure code size is really that different.
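To make that concrete, here's a rough sketch, in Python rather than 6502 assembly and purely as an illustration (not the actual Atari or MS routine), of the shift-and-add loop a binary FP multiply boils down to when the CPU has no hardware multiply:

```python
# Hypothetical illustration in Python (not the actual Atari or MS routine):
# the shift-and-add core that a binary FP multiply reduces to on a CPU
# without a hardware multiply, such as the 6502.
def shift_add_multiply(a: int, b: int, bits: int = 32) -> int:
    """Multiply two unsigned mantissas, examining 'bits' bits of b."""
    product = 0
    for _ in range(bits):
        if b & 1:         # low bit of the multiplier set?
            product += a  # then add the (shifted) multiplicand
        a <<= 1           # shift multiplicand left one place
        b >>= 1           # move on to the next multiplier bit
    return product

assert shift_add_multiply(12345, 6789) == 12345 * 6789
```

A BCD multiply does the analogous thing digit by digit with repeated decimal adds, so the overall shape of the routine, and hence its size, ends up broadly similar.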

 

Interesting tidbit: even though the x86 has DIV and IDIV, FP divide in MS BASIC on the PC is still implemented with the shift-and-subtract method.


Atari has a 128-byte area, LBUFF, where FP expansion/calculation can take place (though it's also shared as the input buffer for E:).

 

C64 uses binary FP but doesn't seem to have any large work area for it.

I think the reasoning is probably a combination of factors: wanting to crunch Atari Basic into 8K probably distracted from doing the FP, which meant they took the easy approach of using the existing, available, inefficient one.


10 hours ago, Rybags said:

I think the reasoning is probably a combination of factors: wanting to crunch Atari Basic into 8K probably distracted from doing the FP, which meant they took the easy approach of using the existing, available, inefficient one.

 

Let's not forget that, precisely because of the FP routines, it doesn't actually fit into 8K. It "borrows" an extra 2K ($D800-$DFFF) from the OS space.


Yes - though the approach taken for Atari Basic seems to be efficiency in the sense of fitting a lot of features in, which comes at the cost of speed. It's feature-rich compared to C= V2.0 Basic (C64).

Though that said, the OS is providing a fair bit of that capability, especially the better graphics and file handling, courtesy of CIO.


On 6/25/2019 at 5:16 PM, R0ger said:

You don't avoid rounding in operations. You do avoid rounding in conversion to and from text. That's sometimes significant. But I'm not sure it's the reason.

I'm sure that is the reason. I see that now, after thinking about a different issue in another thread.

 

By using BCD they can have exact representations of small integers. So that lets them tokenize constants into an FP value and know that when you LIST you'll get back the same number, exactly.

 

In contrast, MS BASIC uses binary FP, which cannot always be reversed exactly. So they leave their constants in text form, and run the text-to-FP routine every time.
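As a quick modern illustration of that round-trip point, with Python's decimal module standing in for BCD storage (this is only an analogy, not the Atari or MS code):

```python
from decimal import Decimal

# Decimal (BCD-like) storage keeps exactly the digits that were typed,
# so converting back to text reproduces the constant verbatim.
d = Decimal("0.1")
assert Decimal(str(d)) == d   # text -> value -> text is lossless

# Binary FP stores the nearest binary fraction instead; the value that
# actually ends up in memory for "0.1" is:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```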

 

Skipping that conversion is likely a major speed boost. Whether that boost makes up for the general performance of BCD as opposed to non-BCD FP code is something I would love to explore.

Edited by Maury Markowitz

On 6/27/2019 at 3:44 PM, Maury Markowitz said:

By using BCD they can have exact representations of small integers. So that lets them tokenize constants into an FP value and know that when you LIST you'll get back the same number, exactly.

Ahem. First, a binary representation with an N-bit mantissa is able to represent (N+1)-bit integers precisely (thanks to the implicit leading 1 bit). Thus, a one-byte exponent/sign plus a two-byte mantissa would be entirely sufficient to represent line numbers exactly. Actually, any 16-bit (two-byte) number. So that is not an advantage.
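A quick sanity check of that claim, as a Python sketch of a hypothetical format with a 16-bit stored mantissa plus an implicit leading 1 (17 significant bits):

```python
import math

def round_to_sig_bits(x: float, sig: int) -> float:
    """Round x to 'sig' significant binary digits."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                       # x = m * 2**e, 0.5 <= |m| < 1
    return math.ldexp(round(m * 2 ** sig), e - sig)

# Every 16-bit integer survives 17-bit-mantissa quantization unchanged.
assert all(round_to_sig_bits(float(n), 17) == n for n in range(65536))
```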

 

Second, BCD *is not* more precise than binary. It is actually much worse. There is a precise mathematical analysis of the problem, but to give a simple hand-waving argument: if you round off a digit in BCD, you lose log2(10), i.e. about 3.3 bits of precision at once. With binary, only one bit. Thus, there are far fewer "choices" for where the "dot" between the integer and fractional part can sit, and this costs precision.

 

Even worse, the Atari implementation is not even plain BCD, but uses a base of 100. Hence, it loses more than 6 bits per rounding operation...
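For reference, the arithmetic behind those figures (plain Python; the 5-byte, base-100 mantissa in the comments is my assumption about the format being discussed, so treat the last number as a rough estimate):

```python
import math

# Bits of binary precision given up when one digit is rounded away:
print(math.log2(10))       # ~3.32 bits per decimal digit
print(math.log2(100))      # ~6.64 bits per base-100 digit (one byte, 00-99)

# Assuming a 5-byte, base-100 mantissa (10 decimal digits), the effective
# precision is roughly 10 * log2(10) = ~33 bits, held in 40 bits of storage
# that a binary mantissa of the same size would use in full.
print(10 * math.log2(10))  # ~33.2 effective bits
```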

 

The *only* advantage is that conversion to and from ASCII is relatively simple. At the price of making multiplication and division a lot worse.

 

All in all, a really naive and bad choice.

