drac030

Members
  • Content Count: 2,065
  • Joined
  • Last visited
  • Days Won: 1

drac030 last won the day on July 26 2016

drac030 had the most liked content!

Community Reputation

1,145 Excellent

2 Followers

About drac030

  • Rank: River Patroller
  • Birthday: 07/28/1970

Profile Information

  • Gender: Male
  • Location: Warszawa, Poland

Recent Profile Visitors

17,992 profile views
  1. This implies that Atari started to sell 8-bit machines in "eastern Europe" in late 1989, certainly after November 1989. However, the XE series had been available in the official (= government-run) retail network called Pewex at least since 1987 (I bought my 65XE there in February 1988), and there was also official support and service. The XE computers being sold then were the 65XE (w/o ECI) and the 130XE. BTW, are you surprised that Wikipedia is wrong?
  2. It is probably documented somewhere, but I cannot find the answer, so I thought I would ask here. It is said that after the NMI line goes low, the CPU cannot accept another NMI until the line goes high and then low again. Now assume there are glitches on the NMI line, so that after the initial, full pulse the line briefly goes high and then low again (uncontrollably or under external control, it does not matter), producing effectively multiple valid NMI pulses within the span of, say, 20-30 CPU clock cycles. After the CPU starts servicing the first NMI pulse of the series, what is the earliest point at which it will be able to accept and start servicing another NMI? I.e. immediately after loading the vector into the PC, or perhaps only after the first instruction of the handler?
  3. I think the required function in question may be the high memory allocator. But (according to the changelog) I added it 14 years ago, so I am not sure it counts as recent.
  4. Exactly. It is even worse, because the XE machines have a completely different PBI connector (known as CART/ECI). Therefore e.g. the (already several times mentioned) IDE+ interface has two connectors, one for the 800XL, the other for the 130XE. A standalone card-docking device (like the 1090 was meant to be), with the additional ability to house standalone devices in the slots, would be very nice, but as far as I know nobody has so far successfully built such a thing. It is probably not much wanted anyway, and most PBI devices are HDD interfaces, where one such device is sufficient for most uses. But it is also sort of a chicken-and-egg problem: no expansion box is needed, because there are no expansion boards, and there are no expansion boards, because there is no expansion box. Someone may build one, but then some expansion boards must be designed so that the expansion box would immediately have some use... then you wait 20 years before people stop looking at the thing as a strange novelty (such as the VBXE; it was designed about 15 years ago if I am not mistaken, so it has already been on the market longer than the entire Atari 8-bit line ever was)... and so on.
  5. You may need to set the Rapidus OS as the OS, as these programs mostly run in native mode, so the extended interrupt services are required. Otherwise I am clueless. I must admit that I have never tried to configure Altirra in Rapidus mode, so I do not know how this works. The generic emulation (CPU 65C816/21 MHz + 4 MB high memory + Rapidus OS) should be enough to run most programs, so I use such a setup whenever I need to check how the code behaves in a slightly different environment.
  6. The Karin-Maxi and the IDE+ (which I mentioned above), and several other devices I believe, use $D1xx and do not conflict when inserted at the same time, because they use the sharing mechanism you seem to be speaking of. So I am not sure it can be said that it was "never really utilized"...
  7. I know of one device which has a pass-through connector: the parallel Karin-Maxi floppy drive. It has two CART/ECI connectors, so effectively three PBI devices may be attached at a time; counting the Rapidus accelerator in, four. The IDE+ does not have a pass-through connector, but other HDD interfaces may. There were also several attempts (that I know of) to build a separate board which could contain multiple PBI connectors, but I am not sure whether any of these projects has been finished.
  8. I am afraid that there are plenty of devices (HDD interfaces, mainly) which use $D1xx. But yours should not conflict as long as its registers are activated and deactivated according to whether the corresponding bit in $D1FF is set or cleared (a short sketch of this convention follows after the post list below). Still, some devices, even if they map and unmap their I/O registers at $D1xx as they should, may also keep specific control registers mapped there all the time; it depends on the actual device.
  9. If I calculate correctly, slowing the rotation down to 288 rpm gains about 250 bytes per track in MFM (a quick check of the arithmetic follows after the post list below). But yes, there is probably a good reason why they (or anyone?) did not use floppies with even fewer rotations per minute. This could probably be solved by slowing the rotation down only in the new density (MFM). I doubt, though, that such a drive could be cheap. Anyway, the 1450XLD catalogues (attached above) speak of the drives being "connected directly to the computer's processor bus" and therefore being much faster than an ordinary 1050. Other sources (like this one http://www.atarimuseum.com/computers/8BITS/XL/1450xld/1450xld.html ) say something about "100K per second data transfers", which I presume must mean 100 kilobits per second, considering that MFM is 250 kilobits/s, i.e. about 30 KB/s. On the other hand, 100 kbps is not very much for a parallel data transfer, considering that Pokey divisor 0 is ~127 kbps. So, has anyone tested how these drives work and is able to quote some precise figures?
  10. @ijor What if they clocked the FDC faster or slowed the rotation speed down (even more)? Are there any bad side effects of a slow rotation speed, like too high a bit density, which would affect reliability even if the track size theoretically was big enough to fit more sectors with correct overhead areas?
  11. This works now, thank you. In the meantime I noticed another issue in the debugger in 65C816 mode: the console window shows that the instruction LDA ($02,S),Y will reference the address $000118 and fetch $00 from there. But in fact, since B=1, it will reference $010118 and fetch $9B, as really happens in the next step. (A short sketch of how this effective address is formed follows after the post list below.)
  12. Anyway, directories are not recreated when renaming them, otherwise you would not be able to rename a non-empty directory. Timestamps in directories are not updated when the contents of the directory are updated, but it seems I will have to look at this more closely.
  13. Oops, I never saw that! It is probably a bug in the rename procedure in SPARTA.SYS. I will see if it is easily fixable. @phaeron Now everything seems to work perfectly, thank you. I noticed, though, one more issue in the debugger: in 65C816 mode, when the code is running in memory past the first 64k and I hit F8, the Disassembly window shows the PC position with the highest 8 bits zeroed. For example, the break address is $3FF9A0, but the disassembly is shown from $00F9A0 (see the note after the post list below).
  14. Thank you very much in advance. I do realize this, and floppy emulators suffer from similar problems. But since Atari disks are accessed by logical sector numbers, and not by physical sector number / track number, it is not so important that the disk returns the same geometry it was formatted with, as long as the capacity is the same and numtrk correctly signals either a floppy disk (= formattable) or a hard disk / ramdisk (= unformattable). I guess it is a fairly safe assumption that, say, if the image is both <= 1440k AND the number of sectors is divisible by 18 for 128/256-byte sectors, or by 9 for 512-byte sectors, then this is a floppy image and numtrk should be either 40 or 80 (a small sketch of this heuristic follows after the post list below). With the special case of 1050 medium density, and with possible support for 35- or 77-track images. Everything else (> 1440k) is a hard disk. That particular ARC file was created on the Atari (on the day indicated by the datestamp on the Windows side), copied to a Unix box using PCLink and SIO2BSD (as the server), then copied via FTP from that Unix box onto a Windows box. The timestamp has been preserved all the way, only to be changed into the current one in the very last step, copying Windows->Altirra using PCLink with Altirra as the PCLink server. So how can I copy the file in the last step while preserving its timestamp? Because if I (conversely) need it updated, I can always use COPY /D. It is misdocumented then. The files have modification times; it is the directories which have creation times.
  15. SDX 4.22 (or 4.2 in general) formatter does not, but SDX 4.4 formatter does.
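
On post 8: the sketch below (Python) illustrates the $D1FF selection convention being described, assuming the usual scheme where each PBI device owns one bit of $D1FF and maps its $D1xx registers only while that bit is set. The device names and the "nothing mapped" value are made up for illustration; a device that keeps some control registers mapped permanently, as noted in the post, would deviate from this model.

    # Hypothetical model of $D1xx register sharing via the $D1FF select bits.
    # Illustration only; names and the floating-bus value are assumptions.
    class PbiDevice:
        def __init__(self, name, select_bit):
            self.name = name                 # hypothetical device name
            self.select_bit = select_bit     # which bit of $D1FF selects it
            self.regs = [0x00] * 0x100       # private image of $D100-$D1FE

        def read(self, d1ff, addr):
            # Answer a $D1xx access only while our bit in $D1FF is set.
            if d1ff & (1 << self.select_bit):
                return self.regs[addr & 0xFF]
            return None                      # register page not mapped

    def bus_read_d1xx(devices, d1ff, addr):
        """What the CPU would see at $D1xx for a given $D1FF selection mask."""
        for dev in devices:
            val = dev.read(d1ff, addr)
            if val is not None:
                return val
        return 0xFF                          # nothing mapped (value illustrative)

    devices = [PbiDevice("hdd-interface", 0), PbiDevice("another-device", 1)]
    print(hex(bus_read_d1xx(devices, 0x01, 0xD1F0)))  # device 0 selected -> its register
    print(hex(bus_read_d1xx(devices, 0x00, 0xD1F0)))  # nothing selected  -> 0xff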
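On post 9: a back-of-the-envelope check of the figures quoted there, assuming only a raw MFM cell rate of 250 kbit/s and nothing drive-specific. The gain from 300 to 288 rpm comes out at roughly 260 raw bytes per track, in line with the ~250 quoted.

    # Arithmetic check of the figures in post 9.
    MFM_BITRATE = 250_000                    # raw MFM data rate, bits per second

    def bytes_per_track(rpm, bitrate=MFM_BITRATE):
        seconds_per_rev = 60.0 / rpm
        return bitrate * seconds_per_rev / 8  # raw (unformatted) bytes per track

    b300 = bytes_per_track(300)               # standard rotation speed
    b288 = bytes_per_track(288)               # slowed-down rotation speed
    print(f"300 rpm: {b300:.0f} bytes/track")          # ~6250
    print(f"288 rpm: {b288:.0f} bytes/track")          # ~6510
    print(f"gain:    {b288 - b300:.0f} bytes/track")   # ~260

    # Transfer-rate conversions mentioned in the post:
    print(f"MFM 250 kbit/s   = {250 / 8:.1f} KB/s")    # ~31 KB/s ('about 30 KB/s')
    print(f"100 kbit/s       = {100 / 8:.1f} KB/s")    # 12.5 KB/s
    print(f"Pokey divisor 0 ~ 127 kbit/s = {127 / 8:.1f} KB/s")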
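On post 11: a minimal sketch of how the effective address of LDA ($02,S),Y is formed on the 65C816, showing where the data bank (B) enters the picture. The 16-bit pointer and the Y value below are assumptions chosen only to reproduce the addresses quoted in the post.

    # Stack-relative indirect indexed: LDA (d,S),Y on the 65C816.
    # The 16-bit pointer is fetched from bank 0 at S+d; the data bank register
    # then supplies bits 16-23 of the effective address before Y is added.
    def ea_stack_relative_indirect_y(pointer, y, dbr):
        """24-bit effective address for (d,S),Y given the fetched 16-bit pointer."""
        return (((dbr << 16) | pointer) + y) & 0xFFFFFF

    pointer = 0x0118   # 16-bit pointer read from the stack at S+$02 (assumed)
    y = 0x00           # assumed Y value

    print(hex(ea_stack_relative_indirect_y(pointer, y, dbr=0x00)))  # 0x118   - what the console showed
    print(hex(ea_stack_relative_indirect_y(pointer, y, dbr=0x01)))  # 0x10118 - what actually happens with B=1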
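On the debugger issue in post 13: the 24-bit code address is simply bank:PC, so zeroing the bank byte reproduces exactly the wrong address shown by the Disassembly window. A tiny illustration:

    # bank:PC composition for the example in post 13.
    break_addr = 0x3FF9A0
    bank, pc = break_addr >> 16, break_addr & 0xFFFF
    print(hex((bank << 16) | pc))   # 0x3ff9a0 - correct break address
    print(hex((0x00 << 16) | pc))   # 0xf9a0   - what the window showed ($00F9A0)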
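On post 14: a minimal sketch of the floppy-vs-hard-disk heuristic proposed there. The function name and return format are invented for illustration; the 1050 medium-density and 35/77-track special cases mentioned in the post are deliberately left out, sides are not modelled, and the numtrk value used to signal a hard disk is not assumed here.

    # Sketch of the geometry heuristic proposed in post 14 (illustrative only).
    def guess_geometry(num_sectors, sector_size):
        size_bytes = num_sectors * sector_size
        per_track = 18 if sector_size in (128, 256) else 9 if sector_size == 512 else None

        if per_track and size_bytes <= 1440 * 1024 and num_sectors % per_track == 0:
            tracks = num_sectors // per_track    # sides not modelled here
            numtrk = 40 if tracks <= 40 else 80  # report a formattable floppy
            return ("floppy", numtrk)
        # Everything else (> 1440k or odd sector counts): unformattable;
        # the exact numtrk convention for hard disks is not assumed here.
        return ("hard disk / ramdisk", None)

    print(guess_geometry(720, 128))     # ('floppy', 40)
    print(guess_geometry(2880, 512))    # ('floppy', 80)
    print(guess_geometry(65535, 512))   # ('hard disk / ramdisk', None)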