rpiguy9907

What is your retro computing most "irrational want?"


It's been a few months and I still get the urge to gut a half-working Model II or III and stuff an i7 or R-Pi into it to play Lunar Lander. I'm not interested in the TRS-80 hardware itself anymore; it's bulky and its logic is low-density. It means little to me in this day and age. I have more nostalgia for a 486 or Pentium II/III, so the TRS-80 stuff is now quite boring. And I would guess a constant maintenance headache unless this, this, and this were done.

 

Though I always ALWAYS (irrationally) wanted one of those rigs as a kid so I could make-believe I was playing in Mission Control. After all, both computers have that "console angle" and theme to their style.

Edited by Keatah

The IBM System/23 Datamaster has entered my radar. Probably the most irrational thing I want currently.


^^ I had to look that one up. At least it isn't a piece that would fill half your garage.

20 minutes ago, carlsson said:

^^ I had to look that one up. At least it isn't a piece that would fill half your garage.

Heh, the name kind of sounds like a huge mainframe or minicomputer, eh?

8-inch floppies intrigue me. Combined with the Model F style keyboard and [email protected] and BASIC, it seems nice. Very expensive though, as you may have guessed. The only one I found for sale was nearly $1000, in untested condition. Nope. Apparently it retailed for $9000 back in 1981.


Yes. A friend of mine had an IBM System/34 (might've been a 5340 or at least something close to that size) in his garage for a couple of years until another fellow collector came to pick it up from him.

Edited by carlsson

Those huge IBM mainframes always reminded me of dishwashers or washing machines.


I want an SGI O2. I developed Ooze on one of those, running Stella and a Linux build of Batari Basic. It was killer. The system had video capture, so I could test on a real VCS, in a window too.

 

The O2 isn't the fastest, though people have souped them up with retrofitted CPUs. Overall, though, you can do just about anything on one. I used to do a lot of big solid modeling on the one I had. 64-bit OS, several GB of RAM. It was killer in the '90s and early '00s.

 

One thing about that machine was its shared-memory graphics. An O2 can take video in, overlay it onto a warped surface, and/or send it right back out again in real time, and it can do that at a couple hundred MHz clock too. Want to map a huge image onto a surface? Yeah. And it can do some operations while doing that. These features didn't see the use they should have.

 

They are really fun machines!  

 

But I'm not gonna get one. I'll just want it for a while. At one point I knew a ton about IRIX and was doing admin for machines all over the place. Good times, and often done on one of those free Juno or NetZero accounts while travelling around too. LOL. When I was done, I was totally done. Unloaded it all and kept nothing. Still done, but... LOL, that's irrational for you.

 

 

 

9 hours ago, potatohead said:

I want an SGI O2.  Developed Ooze on one of those, running Stella and a Linux build of Batari Basic.  It was killer.  System had video capture, so I could test on a real VCS, in a window too.

 

Fellow IRIX / SGI fan here as well; I really miss the machines from the days when they made their own hardware.

 

Strange but true: for a few years, I ran a pair of public-facing DNS servers (BIND, IIRC) on Debian (again, IIRC) on a couple of Indys.  Uptime and load handling were unbelievable, and not being concerned about x86 shellcode was a plus 😁

Edited by x=usr(1536)


I would like an Apple II accelerator. There are no practical use cases for it; all games are optimized to run at 1 MHz, but for some reason having an accelerator just seems cool.

 

No one is still running VisiCalc or writing novels on Apple IIs, so accelerators are completely irrational, but I still want one.


I don't write novels or uber-lengthy works or anything. But I do fill up that RAMworks card pretty well, though.

17 hours ago, rpiguy9907 said:

I would like an Apple II accelerator. There are no practical use cases for this,

I'm not all that well versed in the A2 world. Do people usually pack and crunch software to shorten load times, like they do on e.g. the C64 and Atari? If so, a temporary turbo/warp mode could come in handy to speed up the computer while decrunching, doing various setup calculations, and the like. Or for programs that plot the screen in ways that take an eternity. I had a friend over on Sunday, and we concluded that while loading times can be sped up (on the C64) with various fast loaders, setup and execution time can't, not in the way an emulator can be set to 200% or warp mode.


Packing and crunching wasn't very common BITD, but more recent cracks do use them.

 

Some years ago I introduced Exomizer from the C64 world to the Apple ][ world.  qkumba uses a cruncher on his cracks too.


Don't forget that the Apple II drives are among the fastest in town. It takes but 28 seconds to duplicate a full disk using two drives. And there are fast-load DOSes that live right on the disk. And of course the aforementioned compression routines.

 

Accelerators have little or no effect on disk access. Disk access on the Apple II is designed around the 6502 running at stock speed. The main CPU in the computer itself temporarily becomes part of the controller, so to speak, as there is none in the drive or on its interface card. And this CPU needs to stay in lock-step with the drive's rotational activity. So there's little or no gain to be had there.

 

All accelerators drop to 1 MHz, or at least access the drive controller's slot at that speed.

 

 


Yeah, I was thinking about transfer speed vs. time to decrunch. Of course, programs also get smaller so you can fit more on each disk (or on your own FTP server), so even if the drive is so fast that loading 10K extra takes as much time as decrunching 10K, you still get the space savings.

 

Edit: I found some tables in the summary for Subsizer. Here we have a program that is 54105 bytes before crunching. The two best crunchers bring it down to 30095 and 30690 bytes respectively. However, the decrunch times differ greatly: ALZ takes 74 seconds to decrunch, which equals just below 325 bytes/second (2600 bps), while Exomizer takes 9 seconds = almost 2550 bytes/second (20400 bps).

 

I'm unable to find the Apple II disk transfer rates, so someone fill me in. If a 140K disk takes 28 seconds to duplicate between two drives, that is 5K/second = 40000 bps, but I'm not sure whether the data is parsed and loaded into RAM in that process or just shifted over between the two drives. However, it would indicate that a transfer speed of 20000 bps is doable, which is about the maximum compression/decrunch speed.

 

For comparison, a C64+1541 with Action Replay 6 is said to have a transfer speed of up to 5700 bytes/second = 45600 bps, and an SD2IEC with JiffyDOS or SJLOAD can do 8600 bytes/second = 68800 bps?!? In such an environment, crunching is done only to save space. Already with Final Cartridge 3 on a 1541 you could probably load an uncrunched game faster than it took to load + decrunch it.
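The back-of-envelope rate conversions in this post can be sketched in a few lines. This is just the arithmetic from the posts above, assuming "140K" means 140 * 1024 bytes, and using the ALZ figure (the smaller 30095-byte result) from the Subsizer tables:

```python
# Rate conversions for the figures quoted above.
disk_bytes = 140 * 1024          # one full floppy, assuming 1K = 1024 bytes
duplicate_secs = 28              # two-drive copy time quoted above

rate = disk_bytes / duplicate_secs
print(f"duplicate: {rate:.0f} bytes/s = {rate * 8:.0f} bps")
# -> 5120 bytes/s = 40960 bps, i.e. the "5K/second = 40000 bps" above

# Decrunch rate computed the same way as in the Subsizer tables:
# (uncrunched - crunched) bytes recovered per second of decrunching.
alz_rate = (54105 - 30095) / 74
print(f"alz: {alz_rate:.0f} bytes/s = {alz_rate * 8:.0f} bps")
# -> ~324 bytes/s, the "just below 325" figure above
```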

Edited by carlsson

6 hours ago, carlsson said:

I'm not all that well versed in the A2 world. Do people usually pack and crunch software to shorten load times, like they do on e.g. C64 and Atari? If so, a temporary turbo/warp mode could come handy to speed up the computer while decrunching, doing various setup calculations and alike. Programs plotting the screen in ways that take an eternity. I had a friend over on Sunday and while we concluded that loading times can be sped up (on the C64) with various fast loaders, setup and execution time can't in the same way as an emulator can be set to 200% or warp mode.

I find the 'turbo mode' of emulators useful for games like "Lords of Conquest", where the computer player can spend a lot of time "thinking" about its next move.   Maybe it would help chess games too.

 

6 hours ago, The Usotsuki said:

Packing and crunching wasn't very common BITD, but more recent cracks do use them.

It was common among cracked software back then.   The pirates wanted to compress the disk so they'd have space to put up their flashy load screens, enable trainers, cheats, etc.

1 minute ago, zzip said:

It was common among cracked software back then.   The pirates wanted to compress the disk so they'd have space to put up their flashy load screens, enable trainers, cheats, etc.

That was more a C64 thing than an Apple thing.

 

In fact, from what I can see, trainers were relatively rare on the Apple ][ (though they did exist), and said load screens were relatively limited.


OK, well, saying there are "no" use cases was probably too absolute, but I don't think there are enough of them to make an accelerator rational 🙂

 

Like I am sure some flight sims would be smoother with a faster processor, etc. but the vast majority of games would be less playable/too fast.

 

I can see using an accelerator on the IIgs for sure, which runs GS/OS pretty slowly, but I was referring to earlier Apple II models.


I just picked up a Speccy +2 and a BBC B, and I'll probably get a few more when I can, but some things are priced too highly. If anyone wants to buy me one, I'd like:

 

PET

SAM Coupe

Vectrex

 

If I had to choose, it would definitely be the PET. There's something about that old beast that I absolutely love.

 

I think these are totally rational choices though; I can't think of anything irrational that I want.

7 hours ago, carlsson said:

I'm unable to find the Apple II Disk transfer rates, so someone fill me in. If a 140K disk takes 28 seconds to duplicate between two drives, that is 5K/second = 40000 bps but I'm not sure the data is parsed and loaded into RAM in that process or just shifted over between the two drives. However it would indicate that a transfer speed of 20000 bps is doable, which is about the maximum on compression/decrunch speed.

To clarify:

I ran a Locksmith FastBackup test. It took about 18-19 seconds to read the disk into memory AND then write it out to the 2nd drive. 25-26 seconds if you include a 2nd read pass to verify the written disk.

 

Locksmith FastBackup's routines read in the entire 143K disk in about 8-9 seconds, give or take a fraction. That's about 18KB per second, and that's as fast as you can go using the standard 19-pin Disk II connector cable.

 

In more real-world usage, ProDOS can put data into memory at about 8KB/sec.

 

Apple /// literature indirectly puts the original Disk II at 125 Kbps.

 

I suppose putting a scope on the latch that clocks the data into main memory would give the exact answer. Just too lazy to do it now.
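As a sanity check, the measured and quoted figures in this post line up reasonably well. A quick sketch (assuming "143K" = 143 * 1024 bytes, and taking the midpoint of the 8-9 second read):

```python
# Cross-check the Disk II rates quoted above.
disk_bytes = 143 * 1024            # full disk
read_rate = disk_bytes / 8.5       # full-disk read in ~8-9 seconds
print(f"measured: {read_rate / 1024:.1f} KB/s")   # ~16.8 KB/s

# Apple's 125 Kbps figure from the /// literature, in bytes:
nominal = 125_000 / 8              # 15625 bytes/s
print(f"nominal: {nominal / 1024:.1f} KB/s")      # ~15.3 KB/s
# Same ballpark as the ~18 KB/s Locksmith measurement above.
```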


Yeah, with those transfer speeds you really don't gain a single second from crunching, only space on the floppy disk if that is something you need.


There's always this "magic" point where CPU speed + reduced quantity of data being transferred can outperform no compression. In the PC world I think it was somewhere in the 386/33 - 486/25 timeframe.

 

It's never been about speed though, at least for me. Compression was always used to increase disk space, to increase the amount of wArEz that could fit on a floppy. And I recall doing that very early on with the Apple II: first with things like DDD and cousins, then picture-packers, to double and triple the amount of images per disk.

 

We did discover, though, that running a compression program to convert a full or partly-full disk to a single file for easy modem transfer DID result in faster speeds, because it eliminated partly filled tracks and/or zeroed them out. This was in the 300/1200 baud days.

 

Edited by Keatah

14 hours ago, Keatah said:

There's always this "magic" point where CPU speed + reduced quantity of data being transferred can outperform no compression. In the PC world I think it was somewhere in the 386/33 - 486/25 timeframe.

 

It's never been about speed though. At least for me. Compression was always used to increase disk space. Increase the amount of wArEz that could fit on a floppy. And I recall doing that very early on with the Apple II. First with things like DDD and cousins. Then picture-packers, to double and triple the amount of images per disk.

 

We did discover though that running a compression program to convert a full or partly-full disk to single-file for easy modem transfer DID result in faster speeds. This because it eliminated partly filled tracks and/or zero'd them out. This in the 300/1200 days.

 

Yeah, I never thought it was about speed on the floppy itself, just increasing storage. Maybe on the C64, with its notoriously slow floppy, it had benefits, and that's why it was so common?

 

It definitely made a huge difference in downloading though. For a while there, it seemed like every week there was a new hot compression program that compressed better than last week's: squish, squeeze, crunch, ARC, ARJ, etc.

Eventually, modems added hardware compression.


Of course, it depends on how efficient the crunching algorithm is and how time-consuming the decruncher is. As seen in the link to CSDb, they compared a number of programs:

 

Subsizer versions 0.5 and 0.6, with modes for clean (doesn't affect zeropage and system RAM) and dirty (a little more code, clobbers most of zeropage and the stack).

Exomizer version 2.09, ALZ64 (LZMA based), LZMPi, Pucrunch with four different settings, Bitnax v0.6, Doynamite v1.1, BB v1.1 and 2.0.

 

Out of those, ALZ64 appears to crunch most efficiently but is super slow at decrunching (about 300-600 bytes/second depending on the content). Exomizer and Subsizer are both in the top 5 for smallest size, and Subsizer is also quite fast (about 3000-6000 bytes/second for the clean version, 4000-8000 bytes/second for the dirty version). Bitnax is even faster, 4800-8500 bytes/second, but doesn't yield quite as good a compression result, so files would take slightly longer to load but decrunch slightly faster.

 

Let's assume the following:

 

An uncrunched program that is 40K, possible to crunch down to 32K, with a decruncher that works through the saved bytes at 3000 bytes/second (i.e. it takes about 2.73 seconds on a 1 MHz 6502 to restore it).

 

Apple II with a transfer speed of 8 kilobytes/second.

C64 + 1541 with a transfer speed of 400 bytes/second (i.e. 1/20 of the Apple II).

C64 + 1541 + Final Cartridge 3 with a transfer speed of 4150 bytes/second

 

On the Apple II, the crunched program would take 4 seconds to load + 2.73 = 6.73 seconds before it starts.

The uncrunched program would load in 5 seconds flat, so adding the cruncher actually wasted a few seconds, but of course it saved 8K of disk space.

 

On the original C64, the crunched program would take 82 seconds to load + 2.73 = 84.73 seconds before it starts.

The uncrunched program would load in 102 seconds, so besides saving 8K of disk space we also saved about 17-18 seconds.

 

On the C64 with FC3, the crunched program would take 8 seconds to load + 2.73 = 10.73 seconds before it starts.

The uncrunched program would load in just under 10 seconds, so the time gain would be nearly zero, except for the space savings of 8K.

 

Apparently in this scenario a transfer speed around 3000-4000 bytes/second (roughly the decruncher's own throughput) is the cut-off for time savings. This can be compared with the modem speeds mentioned above: 300 bps = 37.5 bytes/second and 1200 bps = 150 bytes/second. It's not at all surprising that the compressor saved a lot of time transferring your disk images, since even your fastest modem would transfer data at about 2% (!!) of the disk transfer speed.
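The whole scenario can be expressed as a small model. This is a sketch, not anything authoritative: the 40K/32K sizes and the 3000 bytes/second decrunch rate are the assumptions that make the per-machine times quoted above come out right.

```python
# Load-plus-decrunch model for the scenario above.
UNCRUNCHED = 40 * 1024   # bytes, before crunching
CRUNCHED = 32 * 1024     # bytes, after crunching
DECRUNCH_RATE = 3000     # bytes of savings restored per second

def total_time(transfer_rate):
    """Seconds to load the crunched file and then decrunch it."""
    load = CRUNCHED / transfer_rate
    decrunch = (UNCRUNCHED - CRUNCHED) / DECRUNCH_RATE  # ~2.73 s
    return load + decrunch

for name, speed in [("Apple II", 8 * 1024),
                    ("C64 + 1541", 400),
                    ("C64 + FC3", 4150)]:
    print(f"{name}: crunched {total_time(speed):.1f} s, "
          f"plain {UNCRUNCHED / speed:.1f} s")

# Crunching only saves time while the transfer rate is below the
# decrunch rate: the break-even point here is 3000 bytes/second.
```

Setting total_time(v) equal to UNCRUNCHED / v and solving gives v = DECRUNCH_RATE exactly, which is why the cut-off lands right around the decruncher's own speed.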

 

There is another (older) comparison here, with more and different routines.

https://csdb.dk/forums/?roomid=11&topicid=114681&showallposts=1


All this talk reminds me of the discussions about pushing a console's hardware.

 

When the Disk II came out it was a fantastic relief from cassettes. But over the years it was fun to watch its speed increase through improved DOS routines. Same with capacity, though indirectly via compression utilities like DDD and AXEpacker. But most exciting of all were the new commands from DOSes like ProDOS, Diversi-DOS, David-DOS, HyperDOS, and Pronto-DOS.

 

While not intrinsic to the Disk II mechanism in any way, shape, or form, we kids at the time thought it was magical that the chips on the controller card and inside the drive could learn new things.

