Reciprocating Bill

New Members
  • Content Count: 20
  • Joined

  • Last visited

Community Reputation: 41 Excellent

About Reciprocating Bill

  • Rank: Space Invader

Recent Profile Visitors: 139 profile views
  1. Running Cortex Basic out of the FinalGrom (rather than all in 16-bit RAM), Walter is complete in 6 minutes, 31 seconds (same console).
  2. Time on my 16-bit 99/4a running Cortex Basic (the version running strictly out of RAM) is 5 min, 58 seconds. (Is that Walter White under the hat?)
  3. From the infamous 2nd BYTE Sieve article (BYTE Magazine, January 1983): the fastest Z80 timing on the BYTE Sieve was 6.8 seconds (assembly; clock not stated). The fastest 6502 version was 13.9 seconds (assembly, on an OSI Superboard; clock again not stated). I get 6.3 seconds on my TI in assembly - but that's with 16-bit RAM installed. On a stock console: a bit over 7 seconds with registers AND code in scratchpad (how often can you do that?), about 10 seconds with registers in scratchpad and code in standard RAM (the most common configuration), and 15 seconds when everything, including registers, is in 8-bit RAM. Fine for non-performance-sensitive code. Freeing the CPU with 16-bit RAM seals the deal. But then SAMS is ruled out. Gotta choose. (The sieve loop itself is sketched after this list.) https://archive.org/details/byte-magazine-1983-01-rescan/mode/2up
  4. My recollection is that adding the 32K RAM expansion to Extended Basic yields only a very slight (~ 2%) increase in execution speed (which was disappointing, once upon a time). Which is to say that running Extended Basic entirely out of VDP doesn't result in a significant speed penalty. Perhaps the 32K expansion-aware code isn't very well optimized?
  5. I built a small laptop fan into the bottom of the console, just under the power supply. It's very effective. My first attempt exhausted air forward, but I found that surprisingly annoying. I've since redone it with the fan blowing to the right side.
  6. I mounted a small laptop fan directly below the internal power supply, powered by its own 3v power source. The other bottom vents are covered - I wanted to encourage some circulation from the back vents over the logic board. It moves a significant amount of air out of the console and keeps the mug-warmer at room temperature. One symptom of a marginal power supply is a modest flickering of display brightness - typically absent at first, but increasingly evident after 30-60 minutes of use. In a recent instance a swap of power supplies eliminated said flickering.
  7. Although I haven't had problems with my FG99, I worried about all that heat rising through the mug-warmer and into the cartridge. So I installed a 5v laptop fan directly below the power supply after cutting out the right-bottom ventilation grill. Just fits, without raising the console. I supply it 3v from an adjustable external power supply (a bit noisy at 5 volts - needless to say I've installed a quieter fan in my PEB). I covered the other two bottom grills to encourage circulation from the back of the console over the logic board clamshell. At 3v the fan exhausts a lot of air and very effectively keeps the mug-warmer cool. Lastly, it so happens that the cartridge into which I installed the FG99 has an open front. Here's hoping this all extends its life.
  8. The Myarc disk controller does essentially that. CALL DIR(1) displays a catalog of DSK1 in TI BASIC (and all the Extended BASICs). No need to OLD a program in - it's built into the DSR.
  9. Here's an obscure data point: FIB2 running on Wycove Forth 3.0 (an extended FigForth) on a 16-bit console. This version of Forth benefits significantly from the memory upgrade. Benchmark exactly as published: 1' 17". For me the relevant comparisons (of the feverish retrocomputing variety) are with contemporaries of the TI:
     • C64 Forth64: 3' 50"
     • C64 Durex Forth: 1' 57"
     • Apple II v 3.2: 3' 56"
     • Apple II GraForth: 2' 19"
     • Z80 (4 MHz) FigForth: 1' 19"
     (A sketch of the benchmark's general shape appears after this list.)
  10. There are workarounds, to be sure. I enjoy coding stuff in assembly like the Snowflake in my avatar, an instance that requires a lot of floating point, including sines and cosines. The avatar is four superimposed levels of the snowflake, ~2800 line segments. Calculating and drawing those coordinates using the FP routines in ROM takes 1' 43" on my 16-bit console. I also save the coordinates to an 11K array (I originally stored byte-sized coordinates into word-sized memory locations due to laziness and having ample RAM, hence the double-size array). "Playback" of the snowflake then takes 3.6 seconds. The last step was to SAVE a memory image of the array to disk (two files). Having done this once, a cut-down version of the program (all the calculations cut out) LOADs the memory image back into the array, which takes about 2.5 seconds from the TIPI, then begins fast playback. Saving memory images of lookup tables for SIN and COS would be the more generalized next step. (The compute-once/replay pattern is sketched after this list.)
  11. Could be. That's why I pose it as a question: "Would shipping a couple of BCD arguments and an opcode to the PI through the registers on the TIPI, then retrieving results, be equally time consuming?" We'd be transferring a handful of bytes each way. I think that would be worth testing. I doubt the TI would ever have to twiddle bits waiting for a response. My 2012 MacBook Pro running Chipmunk Basic calculates ~60 million sines and cosines per second. While the Broadcom in the PI is probably not quite that fast, whatever its speed (I'm assuming floating point instructions in hardware) it's going to finish a transcendental calculation and have time for a nap before the TI executes a single assembly instruction. The fastest software transcendentals I've seen on the TI are found in Cortex Basic, which is often 3-4x faster than Extended Basic/fbForth with this sort of maths. (A quick throughput check is sketched after this list.)
  12. This would be worth testing. Check my reasoning: Extended Basic performs 100 sines and cosines in a For-Next loop in about 15-16 seconds. fbForth (to my surprise) isn't significantly faster performing the same calculations, using the Geneve-derived functions. That's around 13-14 FP transcendentals per second - a LOT of assembly instructions per calculation. Would shipping a couple of BCD arguments and an opcode to the PI through the registers on the TIPI, then retrieving results, be equally time consuming? From the perspective of the TI, the PI and Python calculations, exclusive of this overhead, would be next to instantaneous. (A rough Pi-side sketch appears after this list.)
  13. Hello all - just wondering: has anyone utilized the TIPI as an FP math coprocessor for the TI? Seems like a natural fit, passing arguments and results back and forth by means of the messaging interface.
  14. The letter doesn't say. But given the apparent expense of CPU time, I think that assumption is safe. Letter vis queens.pdf
  15. Putting this in perspective, I found online a letter written in 1973 (by Edward Reingold at the University of Illinois at Urbana) giving a first report of the number of solutions for the 14x14 and 15x15 N-queens puzzles. The 15x15 puzzle was solved on an IBM 360/75 in 160 minutes (he remarks, "I have no idea where the student got the money"). NASA used four of these during Apollo. I wouldn't attempt that on the TI, but running an interpreted BASIC on my 2012 MacBook Pro (Chipmunk BASIC, in which all variables are floating point) I get all 2,279,184 15x15 solutions in 35 minutes. The point being: sometimes we (or at least I) don't appreciate what we have, computationally, these days. (The backtracking counter is sketched after this list.)
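
For reference on the Sieve timings above, here's a minimal Python rendering of the classic BYTE Sieve loop (8190-element flag array, 10 timed passes, as in the 1981/1983 articles). It's only meant to show what the benchmark computes; the TI figures quoted above come from TMS9900 assembly, not from anything like this.

```python
import time

SIZE = 8190  # flag-array size used in the published benchmark

def sieve():
    flags = [True] * (SIZE + 1)
    count = 0
    for i in range(SIZE + 1):
        if flags[i]:
            prime = i + i + 3                        # flags[i] stands for the odd number 2*i + 3
            for k in range(i + prime, SIZE + 1, prime):
                flags[k] = False                     # strike out its multiples
            count += 1
    return count

start = time.perf_counter()
for _ in range(10):                                  # the article times 10 passes
    count = sieve()
print(count, "primes;", round(time.perf_counter() - start, 3), "seconds for 10 iterations")
```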
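
On the FIB2 numbers above: I'm assuming FIB2 is the usual doubly recursive Fibonacci benchmark that circulates with FigForth benchmark suites; the argument used in the published version may differ, so treat this as a sketch of the workload's shape rather than the published code.

```python
import time

def fib(n):
    # Deliberately naive double recursion - the benchmark exercises
    # call/return and stack overhead, not an efficient Fibonacci.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

start = time.perf_counter()
result = fib(24)   # hypothetical argument; the published benchmark's n may differ
print(result, round(time.perf_counter() - start, 3), "seconds")
```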
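
The compute-once/replay approach described in the snowflake post boils down to this pattern (a Python sketch only - the segment generator, cache-file name, and counts are stand-ins, not the actual TMS9900 program or its geometry):

```python
import math, os, struct

CACHE = "snowflake.dat"   # hypothetical cache-file name

def compute_segments():
    # Stand-in for the expensive part: endpoints generated with sin/cos.
    segments = []
    for i in range(2800):
        a = 2.0 * math.pi * i / 2800.0
        segments.append((math.cos(a), math.sin(a),
                         math.cos(4.0 * a), math.sin(4.0 * a)))
    return segments

def save_segments(segments):
    with open(CACHE, "wb") as f:
        for seg in segments:
            f.write(struct.pack("4d", *seg))   # 4 doubles per line segment

def load_segments():
    segments = []
    with open(CACHE, "rb") as f:
        while chunk := f.read(32):
            segments.append(struct.unpack("4d", chunk))
    return segments

if not os.path.exists(CACHE):
    save_segments(compute_segments())   # slow path, done once
segments = load_segments()              # fast path on every later run
print(len(segments), "segments ready for playback")
```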
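
To put a number on how quickly the Pi side could grind out transcendentals, here's a quick-and-dirty throughput check (run it on whatever host you like; the iteration count and the angles are arbitrary):

```python
import math, time

N = 1_000_000
start = time.perf_counter()
total = 0.0
for i in range(N):
    x = i * 0.001
    total += math.sin(x) + math.cos(x)
elapsed = time.perf_counter() - start
print(f"{2 * N / elapsed:,.0f} sines/cosines per second (checksum {total:.3f})")
```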
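
And a very rough sketch of what the Pi side of the proposed coprocessor might do with an opcode plus an 8-byte TI real. It assumes the TI's radix-100 "real" format (excess-64 exponent byte, seven base-100 mantissa digits, negatives represented by negating the first 16-bit word - my understanding of the format, worth double-checking), makes up the opcodes, and leaves out the actual TIPI message framing entirely.

```python
import math

def ti_real_to_float(b):
    # Decode an 8-byte TI radix-100 real (sketch; no range checking).
    if all(x == 0 for x in b):
        return 0.0
    first_word = (b[0] << 8) | b[1]
    neg = bool(first_word & 0x8000)
    if neg:
        first_word = (-first_word) & 0xFFFF        # undo the sign negation
    exp = (first_word >> 8) - 0x40                 # excess-64 exponent, radix 100
    digits = [first_word & 0xFF] + list(b[2:8])    # seven base-100 digits: d1.d2...d7
    mantissa = sum(d * 100.0 ** -i for i, d in enumerate(digits))
    value = mantissa * 100.0 ** exp
    return -value if neg else value

def float_to_ti_real(x):
    # Encode back to 8 bytes (sketch; the last digit is truncated, not rounded).
    if x == 0.0:
        return bytes(8)
    neg = x < 0.0
    x = abs(x)
    exp = 0
    while x >= 100.0:
        x /= 100.0
        exp += 1
    while x < 1.0:
        x *= 100.0
        exp -= 1
    digits = []
    for _ in range(7):
        d = int(x)
        digits.append(d)
        x = (x - d) * 100.0
    first_word = ((0x40 + exp) << 8) | digits[0]
    if neg:
        first_word = (-first_word) & 0xFFFF
    return bytes([first_word >> 8, first_word & 0xFF] + digits[1:])

OPS = {0x01: math.sin, 0x02: math.cos}             # hypothetical opcode assignments

def handle_request(opcode, arg_bytes):
    # arg_bytes: the 8 bytes shipped over from the TI; returns 8 bytes to send back.
    return float_to_ti_real(OPS[opcode](ti_real_to_float(arg_bytes)))
```

Round-tripping a value through float_to_ti_real and back is an easy way to sanity-check the conversion before wiring it to anything.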
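
Finally, for the N-queens discussion: a plain backtracking counter - the standard algorithm, not Reingold's program or the Chipmunk BASIC version - included just to make concrete what is being counted.

```python
def count_queens(n):
    # Count all placements of n non-attacking queens on an n x n board,
    # tracking occupied columns and diagonals as bitmasks.
    count = 0

    def place(row, cols, diag1, diag2):
        nonlocal count
        if row == n:
            count += 1
            return
        for col in range(n):
            c, d1, d2 = 1 << col, 1 << (row + col), 1 << (row - col + n)
            if cols & c or diag1 & d1 or diag2 & d2:
                continue                   # attacked square, skip it
            place(row + 1, cols | c, diag1 | d1, diag2 | d2)

    place(0, 0, 0, 0)
    return count

print(count_queens(8))   # 92 for the classic 8x8 board; 15x15 takes far longer
```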