elmer

Members
  • Content Count

    295
  • Joined

  • Last visited

Community Reputation

560 Excellent

1 Follower

About elmer

  • Rank
    Moonsweeper

Profile Information

  • Gender
    Male

Recent Profile Visitors

4,532 profile views
  1. Thanks! It certainly looked like the 99105 was deliberately designed to allow the use of the old 3MHz CRU chips, but it's wonderful to have that confirmed.
  2. Yes, the patches are source code patches to the original binutils and gcc source code. They aren't incremental; you just apply the latest patches to the original GNU source archives. The instructions that you're following are from 2012 ... it looks like the "gcc-installer.tgz" script was written some time after that. "gcc-installer.tgz" expands into the "install.sh" shell script, which does all of the downloading/patching/building for you. I suggest that you try building the compiler using that script. If you really want to do your own extract/patch/configure/build process, then I suggest that you follow jedimatt42's build instructions (which look as though they also do some patching to get things to build with the gcc10 compiler).
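     For reference, the manual process that install.sh automates boils down to: extract the pristine GNU sources, apply the tms9900 patch on top, then configure and build out-of-tree. Here's a minimal Python sketch of those steps ... the archive/patch names, target name and install prefix are all placeholders, so substitute whatever your downloaded patch release actually uses.

         import os
         import subprocess

         # Placeholder names ... use the actual GNU source archive and
         # the latest tms9900 patch file that you downloaded.
         SRC   = "binutils-2.19.1"
         PATCH = "binutils-2.19.1-tms9900-x.y.patch"

         # 1) Extract the original GNU source archive.
         subprocess.run(["tar", "xzf", SRC + ".tar.gz"], check=True)

         # 2) Apply the tms9900 patch on top of the pristine tree.
         with open(PATCH) as f:
             subprocess.run(["patch", "-p1"], stdin=f, cwd=SRC, check=True)

         # 3) Configure and build out-of-tree (the target name and
         #    prefix here are assumptions, not gospel).
         os.makedirs("build", exist_ok=True)
         subprocess.run(["../" + SRC + "/configure", "--target=tms9900",
                         "--prefix=/opt/tms9900"], cwd="build", check=True)
         subprocess.run(["make"], cwd="build", check=True)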
  3. I've never tried to build tms9900-gcc before. Which additional patches are you referring to?
  4. I have another question for you ... It looks to me like just dividing the TMS99105's 6MHz clock by two to generate a 3MHz clock for the TMS9901 and TMS9902 *might* be enough to interface both chips to the TMS99105 without any extra logic, and without needing to buy the faster TMS9901NL-40. Is that what you found with your board, or did you have to add an extra wait-state to each TMS9901 and TMS9902 access (beyond the 1 automatic wait-state that is part of the TMS99105's LDCR and STCR instructions)?
  5. It's been years since I used cygwin, because the whole cygwin project has been rather eclipsed by the developers of the mingw32/msys2 project, but if I remember correctly, cygwin doesn't always put the directory of its DLLs into the current path. Have you searched your cygwin directories for the missing cyggmp-3.dll? If you can find it, then copy it into the same directory as your tms9900-gcc.exe file. Like you, I still use Win7, and I just tried building tms9900-gcc with msys2, but it failed with some errors. This is because every new version of the GCC compiler finds more and more warnings/errors in the older binutils and gcc sources, and the old build scripts fail unless the old source is fixed, or the build scripts are modified. I am having exactly the same problems compiling older versions of GCC with the new GCC 10 compiler on both Windows (msys2) and Linux. 😞
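     If it helps, here's a throwaway Python sketch to hunt for the DLL ... "C:\\cygwin" is just an assumption, so point it at wherever you actually installed cygwin.

         import os

         # Walk the cygwin install tree looking for the missing DLL.
         for root, dirs, files in os.walk("C:\\cygwin"):
             for name in files:
                 if name.lower() == "cyggmp-3.dll":
                     print(os.path.join(root, name))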
  6. Your experience is similar to mine, in that I have no past history with the TI99/4A or Geneve, and have no particular interest in keeping compatibility with them. I remember seeing the Cortex issues of ETI in W.H.Smiths, avidly reading the details, and then lusting after the machine, even though I couldn't afford it. Somehow I got hold of a TMS9995 Data Manual, and I was absolutely fascinated by the architecture. Then I became even more fascinated when I read the TMS99105/TMS99110 Data Manual. But alas, I had neither the spare cash, nor the electronic skills at the time, and the fascination never led to any action. Now that SMT prototyping services are becoming affordable, it is fun to think of how a Cortex successor might be designed, especially since Matthew Hagerty's amazing work on the F18A-MK2 would seem to provide an excellent solution to the otherwise difficult video portion of such a machine.
  7. If you're following Stuart's example of driving the EPROM and SRAM chips' /OE lines with the CPU's /RD signal, then that might be where your problem is. Taking a look at the TC551001BPL-70's datasheet, it says that the /OE Access Time is 35ns, the same as the 55ns SRAM that I can get. Add another (generous) 7ns for the signals to propagate through the 74ALS645, and you've got 42ns before the CPU can see the valid data. According to the TMS99105 data sheet, the time between /RD and when the CPU expects the data to be valid is Tded = ((1/2 Tc2) - 63) = (83 - 63) = 20ns. If you look carefully again at Stuart's website, it says that he used a W24257AK-20, with an /OE Access Time of 10ns ... which does meet the TMS99105's timing requirements. <EDIT> This timing condition is probably why some of the examples in The 99000 Microprocessor book seem to suggest only using the /RD signal to activate the 74ALS573 / 74ALS645 data buffers, and not the memory chips' /OE signals.
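     To put numbers on that (a quick back-of-the-envelope check in Python, using the datasheet figures quoted above):

         # TMS99105 read timing check at a 6MHz external clock.
         Tc2  = 1e9 / 6e6          # external clock period, ~167ns
         Tded = (Tc2 / 2) - 63     # /RD to data-valid, per the data sheet

         t_oe  = 35                # TC551001BPL-70 /OE access time (ns)
         t_buf = 7                 # generous 74ALS645 propagation delay (ns)

         margin = Tded - (t_oe + t_buf)
         print(f"Tded = {Tded:.0f}ns, path = {t_oe + t_buf}ns, "
               f"margin = {margin:.0f}ns")   # negative margin = too slow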
  8. I completely agree with the joy of using old parts at the heart of the system (especially the CPU), and keeping the modern stuff as convenient replacements for logic and memory chips. Another thing that I think of as fair is replacing old-and-cantankerous serial port hardware with something modern like an FT245RL, which gives you essentially the same functionality, but is much easier to use. From what I can see, the first challenge with trying to design a simple system using the TMS99105 is going to be running a no wait-state memory cycle, with only 85ns from ALATCH-hi to needing to have the data available at the CPU's pins. That isn't too bad if you're willing to buy/solder a 25ns-or-less SOP/TSOP package chip onto your circuit board, but it seems a bit more challenging if you're looking at the modern SMT PCB prototyping companies, who seem more likely to only offer 55ns 5V SRAM parts. Then again, *if* you can design for the 55ns memory speed, you can take advantage of the wide availability of dirt-cheap 55ns SST39SF040 flash ROM parts. If I am understanding the datasheets correctly, then the 74LS612 is going to need some really fast SRAM to reliably run the TMS99105 with no wait-states. Its tAVQV1 time of 39-79ns would seem to push the limits of your timing, especially with 74ALS573 buffers between it and the CPU, with 3-18ns delays on both the address and data. If you then add in any more decoding logic between the 74LS612 and the ROM/RAM, or any card-to-card buffering, then things would seem to get pretty dicey.
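     Putting rough worst-case numbers on that 74LS612 path (as I read the datasheets ... do check these against your own parts):

         # Worst-case delays (ns) from ALATCH-hi to data at the CPU pins,
         # against the TMS99105's no wait-state budget of roughly 85ns.
         budget = 85

         t_573  = 18       # 74ALS573 address latch, worst case
         t_612  = 79       # 74LS612 mapper tAVQV1, worst case
         t_sram = 55       # 55ns SRAM access time
         t_645  = 18       # 74ALS645 data buffer back to the CPU

         total = t_573 + t_612 + t_sram + t_645
         print(f"path = {total}ns vs budget = {budget}ns")  # 170ns ... way over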
  9. Thank you gentlemen! It is very interesting to hear your different schemes and designs for partitioning memory, and I suspect that you are both creating systems that are far more ambitious and complex than I would attempt. FarmerPotato: As for the EPM7xxx and ATF15xx series chips, yes, both offer buried flip-flops. It is definitely one of the big advantages in going for a CPLD that offers more capability than the 22V10 or 16V8. You might find that the 44-pin PLCC package of the ATF1504 offers you a decent compromise between board-space usage and added capabilities. Jimhearne: Hahaha ... you're really going through a lot of different CPLD chips there! I can't find any mention of an EPM8000 series of chips; perhaps you mean the old 5V FLEX 8000 series of FPGAs? You'll definitely get more capability, but then you will also have to deal with adding a chip to hold the FPGA's configuration bitstream, and possibly some new development hardware so that you can write the bitstream to that chip.
  10. Yes, it sounds like you do have a lot of chips in your memory path! That's where (I hope) a CPLD could help, especially Atmel's still-in-production ATF1508, which offers a few extra tricks over the older Altera EPM7128S, despite being basically compatible. IIRC, with the 22V10 you only have 10 flip-flops for a bank register ... may I ask how you've laid out your memory mapping within the CPU's 64KB? Ah, OK, so you are using multiple address latches, but mainly to save pins on the CPLD, and not because it couldn't actually handle all the latching. And you're using some of the CPLD's flip-flops to create mapping registers for memory paging. Did you decide to use 4 pages of 16KB, or 8 pages of 8KB ... or am I wrong, and you picked a different scheme?
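     Just so that we're talking about the same thing, this is the sort of mapping that I mean (a purely illustrative Python sketch of an 8 x 8KB layout, not a claim about your actual design):

         # The top 3 address bits select one of 8 bank registers, whose
         # contents replace them to form the physical address.
         PAGE_BITS = 13                  # 8KB pages = 2^13
         bank_reg  = [0] * 8             # CPLD flip-flops, CPU-writable

         def cpu_to_physical(addr):
             page   = addr >> PAGE_BITS              # A15-A13
             offset = addr & ((1 << PAGE_BITS) - 1)
             return (bank_reg[page] << PAGE_BITS) | offset

         bank_reg[7] = 0x3F                          # top page -> 0x7E000
         print(hex(cpu_to_physical(0xE123)))         # prints 0x7e123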
  11. Do you mind if I butt in and ask a question about your processor board design? I am toying with the idea of designing/building a 99105 SBC myself. Are you using the Altera MAX7000 (EPM7160S) to both directly latch the address, and also provide the paged-memory-mapping at the same time? If I am understanding "The 99000 Microprocessor" book (and the data sheets) correctly, that seems to be the only way that I can imagine you using 70ns SRAM and still being able to run at a full 6MHz external clock rate with no wait-states.
  12. After taking a look at the ZX0 decompressor in the bitfire project, I decided to do some more optimization work on my ZX0 decompressor, and to optimize the code for the fact that match and literal lengths are almost always less than 256 bytes. The result is an approx 9% improvement in decompression speed, at a cost of adding 4 bytes of code, so the decompressor is now 196 bytes long. Tobias's bitfire decompressor is between 0% and 3% faster to decompress, depending upon the data file, but it is 15% (30 bytes) longer, so I'm happy with the tradeoff. I have applied the same optimization to my LZSA1 and LZSA2 decompressors, so while the ZX0 decompressor is better than before, it is still 25% slower than LZSA2. zx0_6502.mad
  13. The core idea of LZS, which is first Adaptive-Huffman coding the lengths and offsets in hundreds of test files, and then coming up with a Static-Huffman coding scheme that averages out the results from all of those files so that you can avoid the time-consuming Adaptive-Huffman step ... well, that is basically where most of the different LZ-style compressors came from in the late 80s and early 90s, just using different coding schemes. When you do the testing, you find that short matches, or short runs of unmatched literals, are far more common than long runs. You also find that match offsets are more commonly close to the current data than far away from it. This leads to the kind of encoding schemes that you commonly see today (for 8-bit and 16-bit micros). While I encourage you to have fun experimenting with creating your own format, because you will learn a lot ... you will find that you'll have a very hard time beating LZSA1 and LZSA2, which are pretty optimized versions of that scheme. Better compression seems to need a slightly more adaptive solution, and Huffman is one way of doing that, but it has decompression-speed and code-size penalties that usually aren't worth the trouble on old computers. Elias gamma coding is another scheme that does seem to offer worthwhile benefits, as shown by the aPLib and ZX0 compressors. Improvements beyond those tend to involve some really clever, but slow, techniques that just aren't well suited to run on older hardware. If you are writing your compressor to run on the Atari itself, then some of the compression speedup techniques just won't be available to you because of the limited memory size. If you are writing your compressor to run on a modern PC, then I suggest that you take a look at Jørgen Ibsen's BriefLZ compressor, and note its use of hash chains for finding match repeats, which seems to be the first step in speeding up a simple LZ-style compressor.
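     To give a flavour of the hash-chain idea (a much-simplified Python sketch, nothing like as tuned as BriefLZ's real code):

         # Hash the next MIN_MATCH bytes, then walk the chain of earlier
         # positions that hashed the same, testing each for the longest
         # actual match. head/prev together form the hash chains.
         MIN_MATCH = 3

         def find_match(data, pos, head, prev):
             key = hash(bytes(data[pos:pos + MIN_MATCH]))
             best_len, best_off = 0, 0
             cand = head.get(key, -1)
             while cand >= 0:
                 length = 0
                 while (pos + length < len(data) and
                        data[cand + length] == data[pos + length]):
                     length += 1
                 if length > best_len:
                     best_len, best_off = length, pos - cand
                 cand = prev.get(cand, -1)   # follow the chain backwards
             prev[pos] = head.get(key, -1)   # link this position in ...
             head[key] = pos                 # ... at the head of its chain
             return best_len, best_off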
  14. I've now written, and experimented with optimizing, a decompressor for ZX0 ... and it's an interesting format. In comparison to the Z80, our 6502's branches are faster, but the lack of registers hurts us when it comes to all of the bit-shifting that is used in ZX0's Elias-gamma coding. That means that the loop-unrolling seen in ZX0's Z80 decompressors doesn't really help us much on the 6502, especially in comparison to the increase in code size. Heck, even simple inlining of some of the gamma decoding doesn't help much, although optimizing the gamma decoding loop itself did bring a worthwhile speedup (i.e. the percentage speed gained was at least as good as the percentage increase in code size). Just like my previous decompressors, the code has been written to specifically allow for decompression from banked cartridges, or the Atari's banked memory. The decompressor is 192 bytes long, and since it is so small, I'm not really bothering to provide a build option for speed/size. Testing shows that ZX0 is about 15% faster to decompress than aPLib, but only about 5% faster than the "enhanced" aPLib format that Emmanuel Marty's APULTRA supports (with the "-e" flag). Then again, the ZX0 decompressor is 78 bytes shorter than my "enhanced" aPLib decompressor, so I have to conclude that ZX0 is a pretty impressive format! In comparison with LZSA2, ZX0 is about 30% slower to decompress, which is pretty good, but it makes me think that there will still be times where a programmer would choose to use LZSA2 or LZSA1 for their faster decompression speed. zx0_6502.mad
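     For anyone who hasn't seen it, the heart of the format is a gamma-decoding loop something like this (a simplified Python sketch of interlaced Elias-gamma reading, close in spirit to ZX0's but not byte-for-byte the real thing):

         # Interlaced Elias-gamma: alternate "continue?" flag bits with
         # data bits, building the value MSB-first. On the 6502, every
         # read_bit() is a shift of a bit-buffer byte, and that shifting
         # is where most of the cycles go.
         def read_gamma(read_bit):
             value = 1
             while read_bit() == 0:          # 0 = "more bits follow"
                 value = (value << 1) | read_bit()
             return value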
  15. I guess that you are technically correct, but personally, I classify all of those fixed-size coding variants of "either a literal, or a match within a fixed-size window" as being in the same category as "LZSS". There were dozens of minor variants of the same scheme used by different development teams in the late 1980s and most of the 1990s. Yes, LZ4, as an example of variable-size coding, will do better than those ancient compression schemes ... at the cost of slightly slower decompression. As you have shown with your excellent LZSS-SAP program, even those ancient compression schemes still have their place in modern development on these old computers that we love.
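     For reference, the whole family boils down to a decode loop like this (a generic Python sketch of the fixed-window scheme, using Okumura-style 12-bit offset / 4-bit length packing, not any one team's exact format):

         # One flag bit per token chooses between a raw literal byte and
         # an (offset, length) copy from the already-decoded output.
         def lzss_decode(read_bit, read_byte, out_size):
             out = bytearray()
             while len(out) < out_size:
                 if read_bit():                        # 1 = literal
                     out.append(read_byte())
                 else:                                 # 0 = match
                     b0, b1 = read_byte(), read_byte()
                     offset = ((b1 & 0xF0) << 4) | b0  # 12-bit window
                     length = (b1 & 0x0F) + 3          # lengths 3..18
                     for _ in range(length):           # byte-by-byte copy
                         out.append(out[-offset])      # handles overlaps
             return bytes(out)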