apersson850
Everything posted by apersson850
-
Indeed it has. Mine has a memory module with 8 K RAM. The unit itself has 8 K RAM too. You can combine them to get 16 K RAM in the machine, or you can use the memory as two independent memory banks. Then you can write one program in internal memory and swap that with the module's memory, so you can have two different 8 K RAM setups at the same time. The module also has a battery of its own, so you can build a library of modules that you have programmed yourself. I also have a cassette player interface for it, so I can save and retrieve data. The cassette player interface hooks up to the Hex-Bus port at the back. The calculator has two different modes: one as a scientific calculator with ten data memories and statistics functions, the other as a BASIC computer. You can't share data between the two. It's like two completely different devices, just packaged in the same shell. It still works perfectly. My CC-40 is missing a segment in the LCD, but apart from that, it works too. I only have the base CC-40 unit.
-
Usually. Sometimes from the supplier of some control equipment we may choose to use.
-
I do. Especially when there are different programmers working on the same machine. We have machines with a lifetime of well over ten years, so the original programmer may not even be in the company any longer, and most probably doesn't remember what he did if he's still there, when you get the task to fix or add something to the code. Robust library routines are very handy then.
-
I have a TI-74, which is similar to the CC-40, but also works as a scientific calculator. It's a more convenient package. That one I've used a bit, but not the CC-40 I also have. (Yes, I have a problem too. But I actually got the CC-40 as a gift from a well known person in the TI community.)
-
As far as I remember, it's the interval between writing the high byte to VDPWA and reading the first byte from VDPRD that you can't do in two instructions immediately after each other. When I installed my 16-bit memory expansion, I augmented it with hardware wait-state generation to fix this issue, and it triggers only on the VDP read data chip select signal in the console. If I run software that doesn't work in fast RAM (like the game Tennis) and then enable this hardware wait state, it works. But it does of course delay every read, not only the first one, so it's less intelligent than correctly written software. But as far as I know, you don't need to delay the write, just the read. I don't have time to check the VDP data book right now, though. However, I see now that a few posts up from here, it says that the write works without delay, since the CPU does a read before it writes, and that wastes enough time.
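To make the constraint above concrete, here is a small illustrative Python model (not the real hardware, and the cycle counts are invented for the demo): writing the address high byte starts a VDP read-ahead that takes a few cycles, and reading the data port before that completes returns stale data.

```python
PREFETCH_CYCLES = 8  # assumed read-ahead latency, in CPU cycles (made up)

class VdpModel:
    """Toy model of the VDP read path: address write, then prefetch."""
    def __init__(self, vram):
        self.vram = vram
        self.addr = 0
        self.ready_at = 0      # cycle at which prefetched data is valid
        self.buffer = 0x00     # stale read-ahead buffer
        self.cycle = 0

    def write_address(self, addr):
        self.addr = addr
        # Writing the high address byte starts a prefetch of vram[addr].
        self.ready_at = self.cycle + PREFETCH_CYCLES

    def elapse(self, cycles):
        self.cycle += cycles

    def read_data(self):
        if self.cycle >= self.ready_at:
            self.buffer = self.vram[self.addr]   # prefetch completed
        value = self.buffer                      # otherwise: stale byte
        self.addr += 1
        return value

vdp = VdpModel(vram=[0xAA, 0xBB])

vdp.write_address(0)
vdp.elapse(4)            # back-to-back access from fast 16-bit RAM: too soon
too_fast = vdp.read_data()   # returns the stale 0x00

vdp.write_address(0)
vdp.elapse(12)           # one extra instruction's worth of delay
delayed = vdp.read_data()    # returns the real 0xAA
```

The hardware wait state described above does the same thing as the `elapse(12)` call, but unconditionally on every read, which is why it works but is slower than correctly written software.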
-
Haha, doing things like this professionally, I hate when people leave out code that covers cases that never happen. Those cases tend to happen as soon as the machine is on the customer's floor...
-
You realize that you are talking about two different things here, right? The requirements for preemptive vs. cooperative task switching are quite different. UCSD Pascal for the 99/4A was the first environment available for that machine, at least as far as I know, that supported concurrency. Only cooperative concurrency, though. I made an attempt to change it to preemptive, but I ran into issues with code which seemed to be unprotected in the unit HEAPOPS, and without the source for the operating system, I found it virtually impossible to correct that. Note that if you run code that reads from the VDP, you need one instruction between writing the most significant byte of the VDP address and reading from the VDP. Otherwise your code will not work on machines with 16-bit RAM without wait states.
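The cooperative model described above can be sketched in a few lines of Python using generators as tasks. Every name here is illustrative; this is the general technique, not the p-system's actual implementation. Each task runs until it voluntarily yields, which is exactly what makes it cooperative rather than preemptive: a task that never yields starves everyone else.

```python
from collections import deque

def scheduler(tasks):
    """Round-robin cooperative scheduler: run each task until it yields."""
    ready = deque(tasks)
    order = []
    while ready:
        task = ready.popleft()
        try:
            order.append(next(task))   # run until the task yields
            ready.append(task)         # it cooperated: requeue it
        except StopIteration:
            pass                       # task finished, drop it
    return order

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"            # yield = voluntary task switch

run_order = scheduler([worker("A", 2), worker("B", 2)])
# run_order interleaves the tasks: A:0, B:0, A:1, B:1
```

A preemptive system would instead interrupt tasks at arbitrary points, which is why shared structures (like the heap code in HEAPOPS mentioned above) must be protected against being entered mid-update.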
-
In this particular case it's a question of updating the position of the same sprite over and over again, so it makes sense to rewrite the address each time.
-
Hmpf, I got inspired and intended to try the Pascal thing on my 99/4A. But it's obviously been too long since last time, because I needed to use a different Pascal system disk (the old one broke down, but I had a copy). So I thought I'd better make a new copy of the copy, so that there are still two. And I managed to overwrite and destroy what seems to have been the only disk I had which held everything related to DSR routines and such for my own cards. Now I'm not in the mood for any retro computing any more...
-
Internal 64 K RAM memory expansion
apersson850 replied to apersson850's topic in TI-99/4A Development
It depends on which TI 990 model you look at. The TI 990/10 had more of that stuff than the TI 990/9, for example. And it's the TI 990/9 that the TMS 9900 implements on a chip. The TMS 99000 implements the TI 990/10, which then became the TI 990/10A. The TI 990/12 had even more features, like floating point routines in hardware etc.
-
No, I think I started early 1983 or so. I didn't have the TI Forth at that time, at least. But sometimes we got things later here in Sweden than in the US.
-
When you buy the p-code card, you get just the operating system. What you can do then is run programs compiled by somebody else. That's it. Then you add a disk with the Editor, the Filer (another word for disk manager) and various Utilities. Now you can edit source code, copy files and do various extra things, like changing the code type of code files. Then you add a disk with the Pascal compiler. Now you can compile your Pascal source files into executable code files, or compile source written as separately compiled units.

Then you add a disk with the Assembler and Linker. Now you can assemble source files into object files. They are still not executable, though. For that you use the Linker. It will resolve the procedure declarations that are external in the Pascal program and link them to the corresponding procedure declarations in the assembly program. Then you get a single executable code file, which contains both the Pascal and assembly programs you need.

Another interesting feature is that assembly programs can be assembled and linked not only to be relocatable, i.e. so that you can load them at any convenient place in memory, but you can actually make them dynamically relocatable. If you do, the code loader in the OS will not only resolve addresses on loading, but also save a relocation table, which makes it possible to move the code in memory, should you encounter an issue where you run out of stack space but can solve that by moving code closer to the heap, for example. It's the fact that you can have things like this done by the system, without having to do anything more than declaring the assembly procedure as RELPROC instead of PROC, that makes the p-system so powerful. If you want to do that in Forth, you have to write the whole mechanism for it first. You can chain programs in Extended BASIC, so that you can run larger programs than fit in memory, but you have to control the mechanism yourself.
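The dynamic relocation idea above can be sketched like this. This is a generic model of a relocation table, not the p-system loader's actual data format: the table records which words in the code image hold absolute addresses, so the loader can fix them up at load time and again whenever the code moves.

```python
def load(code, reloc_table, base):
    """Fix up every address word in the code image for a load address."""
    image = list(code)
    for offset in reloc_table:
        image[offset] = (image[offset] + base) & 0xFFFF
    return image

def move(image, reloc_table, old_base, new_base):
    """Relocate an already-loaded image: undo old base, apply new one."""
    for offset in reloc_table:
        image[offset] = (image[offset] - old_base + new_base) & 0xFFFF
    return image

# A tiny fake "code file": three 16-bit words, where the word at
# offset 1 is an address relative to the start of the code.
code = [0x0200, 0x0004, 0x1000]
reloc = [1]                     # relocation table: offsets of address words

img = load(code, reloc, base=0x2000)    # loaded at >2000: img[1] == >2004
img = move(img, reloc, 0x2000, 0xA000)  # moved to >A000: img[1] == >A004
```

Without the table, the loader has no way of telling an address word apart from a constant that happens to have the same value, which is why the information must be recorded at link time.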
With Pascal, you just add SEGMENT to procedures you know you don't always need, and then the operating system will roll them in and out of memory on a need-to-do basis.

The 4000-line Pascal program was used to compute the correct piping sizes for dust evacuation systems. Such systems are for example used to get sawdust out of a sawmill. They can become complex enough that a decent manual calculation of pipe sizes could consume a week for an engineer. If you then needed to change anything, you spent several days updating the calculations, since everything affected everything else. After keying in the basic layout of such a system, a task which perhaps took a few hours, you could change a value and recalculate the whole thing in two minutes on the 99/4A. And you got the same result for the same system each time. As the calculations include boring iterative algorithms, where you calculate a possible range of pipe sizes and then test which one works best, someone who did it manually would take shortcuts and get different results for the same system each time he did the calculation.

An indication of the portability of the UCSD p-system is that I later ported this to Turbo Pascal 4.0 for use under DOS on a PC. Except for a few system-related things, everything was a carbon copy of the program on the TI 99/4A. And it worked.
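The "boring iterative algorithm" above is the kind of loop that is tedious by hand but trivial for the machine. Here is a hedged sketch of the idea only: the diameters, velocity limits and selection rule below are generic textbook-style values I picked for illustration, not the original program's data or method.

```python
import math

# Assumed standard duct diameters in metres (illustrative values).
STANDARD_DIAMETERS_M = [0.10, 0.125, 0.16, 0.20, 0.25, 0.315, 0.40]

def pick_diameter(flow_m3_s, v_min=18.0, v_max=25.0):
    """Smallest standard diameter keeping air velocity in [v_min, v_max].

    Dust transport needs a minimum velocity so sawdust doesn't settle,
    and a maximum to limit pressure loss; the limits here are assumed.
    """
    for d in STANDARD_DIAMETERS_M:
        area = math.pi * d * d / 4.0        # duct cross-section in m^2
        velocity = flow_m3_s / area          # air velocity in m/s
        if v_min <= velocity <= v_max:
            return d, velocity
    return None                              # no standard size fits

# e.g. 0.5 m^3/s of air for one branch of the system
choice = pick_diameter(0.5)
```

Doing this for every branch, then re-balancing the whole network after each change, is exactly what made the manual calculation take days and the program minutes.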
-
The unusual XOP instruction was intended to implement new instructions in software, when programs designed for processors with these functions implemented in hardware were run on machines that didn't have that hardware. Remember that the TI 990 minicomputer series started as a machine with the CPU implemented across several boards with TTL circuits. When the TI 990/10, which had a TTL implementation of the CPU, was upgraded to the TI 990/10A, which used a TMS 99000 CPU, they went from 5½ boards to one. That single board had several special LSI circuits, most in 64-pin DIL packages. So the size of the TMS 9900 chip was nothing out of the ordinary at Texas Instruments at that time. The TMS 9900 does support XOP, but doesn't have any kernel mode. As you wrote, kernel mode was introduced in the TMS 99000.
-
Yes, and with the TMS 9995 the internal memory is the only memory that can be accessed 16 bits wide. The external data bus is only 8 bits wide.

Are there still TMS 9902 UART ICs available somewhere? eBay? I'm asking because while looking at my IC inventory the other day, I realized that I still have a few TMS 9995 and TMS 9901 chips in stock, but no UART intended for CRU use. Reading this site I'm getting somewhat inclined to do some old-style hardware project again... Communicating with the "thing" is simplest done serially, though. I could use a generic UART, of course, but they don't fit as nicely into the I/O space of a TMS 9xxx CPU.

Regarding "register" access time: when the TI 990/9 was designed, TI decided to go with a memory-based architecture, since they had developed a memory chip that was just as fast as their CPU technology. Thus they wouldn't gain performance by implementing a conventional limited register set, but gained tremendous flexibility by having a virtually unlimited number of register files in memory. Today the idea looks stupid, but back in the 1970s it was logical. The TMS 9900 is then an implementation of the TI 990/9 processor on (almost) a single chip. Add the clock generator and you're ready to go. The TMS 9900 was used in the TI 990/4 and TI 990/5 computers.
-
If you have both code and workspace in 8-bit RAM, then move both to 16-bit RAM, the speed increase is around 110%. This is quite relevant when you use the system as I frequently did, i.e. writing assembly to support high-level languages. In such cases, it's much easier to use a workspace that you define with your own code, as you don't have to adapt to how the high-level language you're running under uses the fast RAM in the console. Extended BASIC and Pascal have their own ideas about how to handle that memory. The BASIC default WS is >83E0, but the p-system uses >8380, for example. If you have the WS in your own code, you don't care, but you also get the slowest memory for everything in the normal configuration.
-
No, the p-code system on the TI implements a PME (p-machine emulator) in TMS 9900 native code, where some parts of the code are copied to scratch-pad RAM for optimal speed. The Pascal compiler produces p-code, which is then interpreted by the PME. There's no GPL at all involved in the p-code system. For some reason a lot of people seem to think so. It could be because of the hardware design of the p-code card.

When the p-system is running, the PME runs assembly code on the p-code card, in RAM at >8300 and in the low memory expansion. There's 12 K ROM on the p-code card, at >4000 - >5FFF. The first 4 K are always the same; the last can switch between two banks. Then there are 48 K GROM on the p-code card too. This is probably the source of the GPL confusion, but these GROM chips simply implement a GROM disk. This disk, showing up as the OS: volume in the system, contains the entire operating system for the p-system. Here you find files like SYSTEM.PASCAL, SYSTEM.MISCINFO etc., which define how the p-system works. Having them in GROM is actually faster than the traditional implementation, where you read everything from a floppy disk. You can still change the system, in spite of it being in ROM: if you make a new SYSTEM.CHARAC file and store that on a disk, you redefine the character set used by the system, for example. If such a file is in the system drive on startup, it will override the file that's fixed on the OS: volume. There's also assembly code stored in GROM. This is code that's transferred to the low memory expansion on startup, and to some other places in RAM as well. But it's only read during startup.

The reasons for having code spread out everywhere vary. Code in RAM at >8300 is there for speed; it's the inner core of the interpreter that runs here. Code in RAM at >2000 - >3FFF is there, among other reasons, to be able to run when the p-code card is disabled.
Since the card is in the expansion box, it must turn off when the computer needs access to the RS232 card or a disk controller, for example. The interrupt service routine used by the p-system also runs here. Code in the p-code card saves space in RAM. The bulk of the PME runs here, as do some low-level intrinsics like MOVERIGHT and SCAN.

Apart from that, it's true that the PME implements a stack-based machine. It's also flexible enough to run p-code from CPU RAM, VDP RAM or p-code card GROM. A lot of the optimizations done within the system are there to make all the features work within the space of a 32+16 K RAM machine, not to run at the highest possible speed. When a procedure is called, an environment record is created on the stack. Inside this record, all local variables are allocated. There are special p-codes used to address a variable at a certain offset inside this environment record. This is of course less efficient than reading a memory location directly. There are also advantages to this approach, though, but you need to understand all the capabilities of the system to appreciate them.

One thing is recursion. Since local variables are pushed on the stack when a procedure is invoked, and popped when you return from the procedure, only available memory limits how many times a procedure can call itself. Another thing is memory management. Since only global variables are static, p-code programs are dynamically relocatable. As Pascal allows for program segmentation, which means that you can split a large program into segmented procedures, which only need to be resident in memory when they are actually running, code may be moved around in the system at runtime, if the system runs out of memory when attempting to call a procedure that's currently on disk only. The UCSD p-system Pascal compiler also allows separate compilation, using units.
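The stack-machine idea described above can be shown with a toy interpreter. The opcode names and encoding below are invented for illustration; real UCSD p-code is far richer, but the principle is the same: operands live on an evaluation stack, and locals are addressed by offset into the current environment record rather than by absolute address, which is what keeps the code position-independent.

```python
def interpret(code, env_size=4):
    """Toy stack-machine loop: invented opcodes, p-code-style structure."""
    stack = []
    env = [0] * env_size      # environment record: local variables
    pc = 0
    while pc < len(code):
        op = code[pc]
        pc += 1
        if op == "LDC":       # load constant onto the stack
            stack.append(code[pc]); pc += 1
        elif op == "LDL":     # load local variable at given offset
            stack.append(env[code[pc]]); pc += 1
        elif op == "STL":     # store top of stack into local at offset
            env[code[pc]] = stack.pop(); pc += 1
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "RET":     # return top of stack
            return stack.pop()
        else:
            raise ValueError(f"bad opcode {op!r}")

# Compute (3 + 4) * 2 via local variable 0.
program = ["LDC", 3, "LDC", 4, "ADD", "STL", 0,
           "LDL", 0, "LDC", 2, "MUL", "RET"]
result = interpret(program)   # 14
```

Because every variable access goes through an offset into the current frame, nothing in the compiled program depends on where in memory the code or its data happens to sit, which is exactly what makes the relocation and segment-swapping described above possible.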
That means you can have a library of functions you frequently use, like the sprite functions in this case, and make them available just by writing uses sprite in the code. Everything else is automatic. It's mentioned above that the p-system couldn't find the sprite unit unless the compiler disk was in drive #4:. That's because the system was started with the file SYSTEM.LIBRARY in that drive, and there's no library reference file active to tell the system where else to look for it. The p-system is flexible enough to have any number of separate libraries available, even on different drives, but if you have that, you have to write a text file that lists the libraries that are available, including the drive number or disk name where they reside, and register the name of this library map file inside the p-system.

Putting everything together, I've not found any language available for the 99/4A to be faster than Pascal. But then I count the whole software development process, not just the time the program is executing. I'm not too often helped by a program that runs in seconds instead of minutes, if the first one takes weeks to develop and the second only days. The slow program will be ready long before the fast one anyway. There are of course things that require faster speed than Pascal can support. Then it's good that linking assembly routines is fairly simple. I typically develop programs in Pascal only (provided I don't need something that can't be done in Pascal), and then, when I know it's working, convert time-critical routines to assembly if I feel it's worth the effort. Sometimes you can simply code something using the special intrinsics that are in UCSD Pascal (there to allow the whole operating system to be written in Pascal) to improve your own programs. In the specific case here, moving sprites around, I don't know what takes so much time. Maybe the sprite library, which is pretty flexible, carries a speed penalty for that.
I've hardly ever used sprites in Pascal, so I don't know. At least 99% of the time I've stayed in the default VDP mode, which is 40 characters wide, text only. The p-system loads code into the primary code pool first, if there's space available. For this short program, there definitely is. The alternate code pool runs slightly faster, though. I presume you haven't tried changing the language type from pseudo to m_9900, have you? Doing that forces the system to load the code into the secondary code pool. To wrap it up, the flexibility and operating system support the p-system offers are unique within the 99/4A scope. The largest application I've written for the 99/4A is a bit above 4000 source code lines. That's quite a lot for a system with only 48 K RAM, but it runs.
-
True, but this was even before we got the first Forth designed to run on the TMS 9900.
-
Yes, you could connect two consoles via the PIO ports as well. But since the parallel port is bidirectional in simplex mode, you need to make sure you control the data direction. It's different with RS-232, as there are separate channels for transmit and receive, both available simultaneously (duplex mode). Back then, the signal timing outpaced the processors, so sending eight bits in parallel was faster than serial communication. Nowadays you are better off using serial, since it's easier to handle the signals on one wire than on eight when the data rate is so high.
-
You do know that "coffee warmer" is the common phrase for the hot area below the cartridge port in a standard TI 99/4A, don't you? It's funny that in Swedish, TI(ny) reads like TI(new), and this is really new in spite of being old, if you get my drift.
-
Here is a thread.
-
Inspired by a post by Matthew, I decided to write this post about the internal memory I once built into my console. Inspired by an article about somebody who installed 64 K RAM in the console, but really only used half of it as a 32 K RAM expansion, I decided to do the same modification. But I thought it was a pity to install 64 K without being able to use all of it. After some thinking, I came up with a design which would fulfil this specification:

- Provide an internal 32 K RAM expansion, just like a card in the expansion box, but on a 16-bit bus with no wait states. A memory access would be two clock cycles (the minimum for the TMS 9900), not six.
- Allow the 32 K RAM expansion inside the console to be disabled by software. To avoid using any memory addresses for this paging, unused CRU bits (base address >400) in the console would handle the paging. When the internal 32 K RAM was disabled, reading/writing to addresses where it resides would instead reach the PEB in the normal way. Thus a 32 K RAM expansion card in the PEB would be able to co-exist with the internal expansion, bringing the machine to 96 K RAM (CPU access) plus 16 K RAM (VDP access).
- Split the 64 K RAM into 8 segments, where the four segments at the same addresses as the internal ROM, the DSR space, the command module space and the memory-mapped devices space (plus the internal 256-byte RAM) could be mapped in and overlay the console memory. This would allow, for example, copying the internal ROM to RAM, then enabling the RAM, and immediately you have an internal monitor you can change as you like. For other applications, this makes it possible to have a contiguous 64 K RAM address space, if you need that for some purpose.
- Allow write-through to the RAM, except in the memory-mapped device space. If you are unfamiliar with the concept, it means that if you read from ROM when RAM is disabled, you read the ROM, but if you write to the ROM when RAM is disabled, you actually do write to RAM. Thus you can copy a ROM location to a RAM location by a MOV @here,@here. Very convenient, but since the 99/4A has memory-mapped devices in the address space at >8000 - >9FFF, I didn't want writing to any of these devices to pass through to RAM.
- Provide hardware to generate an extra wait state if the program reads from the VDP. This would allow software that doesn't adhere to the rules for inserting extra instructions to delay such reads to still function as intended. You don't have to follow the rules when using the slower 8-bit memory in the PEB, but it's different when running from fast memory.
- Make it possible to fit the whole thing by piggy-back mounting on ICs already in the console. The metallic clamshell should still be possible to mount.
- At power-on, or after a reset, the machine should start with the internal RAM expansion active at >2000 - >3FFF and >A000 - >FFFF. The remaining parts of the RAM should be disabled. The VDP read wait-state generation is also off after a reset.

Block diagram of the expansion. Circuit diagram of the expansion. Photo of the install.

The documentation is actually not 100% accurate any more. I realized after a while that it would be better if I could disable the 8 K RAM expansion separately from the 24 K one. There's one unused gate in U006, so I moved the CRU bit for the VDP delay by one, and used two bits for the 32 K RAM section: one for >2000 - >3FFF and the other for >A000 - >FFFF. It would be possible to split the 24 K part into more segments too, but that required more ICs, so I didn't consider it worth the effort. Anyway, this implies that software running with code and WS in the 8 K RAM has access to two 24 K RAM segments (if you also have a memory card in the PEB). Or the opposite: when code and WS are somewhere in the 24 K RAM, you have access to two 8 K RAM pages in the low memory expansion. Note that with an expansion like this, you don't have to worry about where you have your workspace. All memory is as fast as the standard RAM pad at >8300.
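The write-through behaviour described above can be illustrated with a small software model (this is a sketch of the concept only, not the actual logic, and it ignores the exception for the memory-mapped device space): while the RAM overlay is disabled, reads reach ROM but writes still land in RAM, so copying a location onto itself moves the ROM contents into RAM.

```python
class BankedMemory:
    """Toy model of a RAM overlay with write-through over ROM."""
    def __init__(self, rom):
        self.rom = dict(rom)         # addr -> byte (console ROM contents)
        self.ram = {}                # the internal RAM overlay
        self.ram_enabled = False     # one CRU bit per region, simplified

    def cru_enable_ram(self, on):
        self.ram_enabled = on

    def read(self, addr):
        bank = self.ram if self.ram_enabled else self.rom
        return bank.get(addr, 0)     # RAM off: reads fall through to ROM

    def write(self, addr, value):
        self.ram[addr] = value       # write-through: writes always hit RAM

mem = BankedMemory(rom={0x0000: 0x83})

# The MOV @here,@here trick: read the ROM byte, write lands in RAM.
mem.write(0x0000, mem.read(0x0000))
still_rom = mem.read(0x0000)         # RAM disabled: still reading ROM

mem.cru_enable_ram(True)
copied = mem.read(0x0000)            # overlay now holds the ROM byte...
mem.write(0x0000, 0x42)
patched = mem.read(0x0000)           # ...and it can be patched, unlike ROM
```

Doing this over the whole ROM region and then flipping the CRU bit is what gives you the modifiable internal monitor mentioned above.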
-
OK, and then you have to find the parts to populate the currently non-existent board. Now I know. Thank you. Many years ago, I had a software idea which I had to abandon due to too little RAM in my machine, in spite of it having 172 K RAM, which was rather a lot in the mid 80's. I would have needed, say, 256 K RAM or something to make that software feasible. It did run, but intolerably slowly, due to all the file access I had to do instead.
-
Yes, of course, the internal 64 K RAM I have is for a different purpose. When I stated that it's the best design I've seen so far, I meant among internal-to-console modifications. Since it covers the address space of the CPU, and only that, it's limited to 64 K RAM. But there it allows overlaying existing memory as well as providing a standard RAM expansion, and all this on a 16-bit bus. The SAMS design is instead good at providing an almost unlimited amount of memory (at least seen from the 99/4A horizon), but available on the slower 8-bit bus. The good thing is that my internal memory expansion does allow for disabling itself, so the console can see the external memory expansion. Thus it could co-exist with the SAMS card. Can I buy a SAMS today?
-
It was a long time ago that I worked on modifying the 99/4A. I'd have to dig into the schematics to be sure, but it could be impossible to overlay areas other than >2000 - >5FFF and >A000 - >FFFF from the expansion box. My memory expansion is inside the console, so it can catch the memory decoding and reroute it before it reaches the chips it would normally activate. But, again, I don't remember this for sure. Checking a bit, I see that the SAMS card uses a memory mapper chip, a version of the old 74LS612. Still, it would at least be possible to design the card in such a way that it would also present a 4 K page at >5000 - >5FFF. Although that would make the design of the card more complex, since if you provide that capability, then you need to have 32 K visible at all times (corresponding to the normal expansion) and another 4 K only when the card is enabled, since the memory at >5000 - >5FFF would overlap other DSR programs.
-
No, perhaps not in games, as they are frequently optimized for speed and nothing else. When writing code for the 9900, I frequently used a lot of subroutines, since I don't consider it worth saving a few microseconds in execution if I can instead save minutes or hours in code design and maintenance. But I'm a professional in embedded programming, so I'm used to looking at the total cost and time spent to get the code executed. That includes the whole software development cycle. Since my software typically controls machines, nobody cares about how much margin there is, as long as it's fast enough. But they do care if it takes another week to get it done. Of course I do speed optimization too, but that's frequently at the level of designing the right things to be done by the proper tasks, with the correct priorities and cycle times. And taking hardware buffers and such into account, to prevent lockups from happening due to buffered items blocking other things from being processed. It's quite different from saving a few clock cycles by avoiding a subroutine call.
