Everything posted by apersson850
-
I read your notes. I haven't checked exactly what's going on at startup, within the code that's loaded from GROM to the memory area 2000 - 251E. But if you don't have the SYSTEM.CHARAC file on your root volume when booting, the system starts up with the normal TI character definitions. Now I haven't checked if they have a SYSTEM.CHARAC file on the OS: volume (at least not recently enough to remember), but it seems likely they read the character definitions that are stored in GROM to get the system going.
I've mapped the entry points for every single p-code opcode in the PME, so I know where all instructions start being interpreted. There are at least three clusters of code being loaded: init code, which is then replaced by the 80 character screen memory; code loaded to 8 K RAM, which stays resident during the p-system's execution; and code transferred to scratch pad RAM, for execution speed. All this code comes from a code repository in GROM. Wherever it references the ROM or GROM directly on the p-code card, it has to be changed. All pointer tables for the PME must also be changed, when it runs in a different address space.
GROM access via R4 and R5 is only done during startup, that's correct. At least as far as I've found. R0-R7 in the main workspace are scratch registers. Registers R8-R15 have fixed purposes.
-
I'm really happy that I installed 16-bit wide RAM in my console. Thus any memory I use is equally fast.
-
That's a smaller task than one may think, though. The p-system is clever enough to reference GROM via an address in a register in many cases. Hence reading from GROM is *R13, while writing to the address register is with @0402(R13). It's mainly during startup that a lot of other shortcuts are used.
P-code GROM at E804 is loaded to RAM at 2000 - 251E when booting. That code only runs at startup, though. When running, the p-system uses RAM at 2000 - 277F as the screen memory. Inside that code, there is probably more data being loaded from GROM to RAM to build all the tables the p-system uses when running. I haven't mapped the p-code GROM, but it's reasonable to assume that the part at lower GROM addresses is what makes up the OS: volume. One would have to load the code from GROM into RAM, without running the p-system, and then disassemble it to figure out what's going on there. Since it's assembly code, its own references will not have to be changed, but references to ROM and GROM on the card need to be modified. I don't know if additional code is above or below the GROM address E804.
I notice that all blocked devices map to the same BIOS routine, so there have to be different PCBs (Peripheral Control Blocks) for different kinds of volumes. Just like I had to create a different PCB for a RAM disk, compared to a physical disk.
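To illustrate that addressing scheme, here is a small Python model of a GROM port (simplified, and the class and method names are mine): the address register is written one byte at a time through the write-address port, which sits at offset 0402 from the read-data port, and every data read auto-increments the address. That auto-increment is why a plain *R13 read suffices for sequential fetches once the address is set. Real GROMs also prefetch, which this sketch ignores.

```python
class Grom:
    """Illustrative model of GROM port behavior, not actual hardware code."""

    def __init__(self, data):
        self.data = data
        self.addr = 0
        self.high_byte_pending = True

    def write_address_byte(self, byte):
        # Two consecutive writes to the write-address port set the
        # 16-bit address, high byte first.
        if self.high_byte_pending:
            self.addr = (byte << 8) | (self.addr & 0xFF)
        else:
            self.addr = (self.addr & 0xFF00) | byte
        self.high_byte_pending = not self.high_byte_pending

    def read_data(self):
        # Reading the data port returns one byte and auto-increments
        # the address, so sequential reads need no further addressing.
        value = self.data[self.addr]
        self.addr = (self.addr + 1) & 0xFFFF
        return value
```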
-
Yes, they are in the DSR space. But if you want to move the whole thing to the cartridge space, you have to take into account the limited memory address decoding available in that port. My point was mainly that you can't move the p-code GROMs all into the normal GROM space in the console, as there are eight p-code GROMs, so they'll occupy the entire space in that case. I presume TI didn't make a module out of it, since their module casings couldn't house 12 K ROM and eight GROM at the same time.
-
So, I did check. The p-code card does call console GPL for floating point operations. It works like this:
1. Pop one float from the stack to FAC.
2. Pop one float to ARG.
3. Load GPLWS and call the desired routine (e.g. floating point divide).
4. Return to the Pascal WS.
5. Check for errors (e.g. divide by zero).
6. Push the result in FAC to the stack.
This implies that console GROM must be available in the normal way when the p-system is running. Oh, and just to make that clear: I've not said this is impossible, I've just said it's not easy. The fact that only one person has done it, and maybe never completed it, seems to support that statement.
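A minimal Python model of that call sequence may make the data flow clearer. Everything here is my own sketch: the function names are invented, and I'm assuming the usual console convention where the result is ARG divided by FAC (divisor on top of the stack), not quoting the card's actual code.

```python
def fp_divide(stack, console_divide):
    """Model of the sequence above: pop operands, call the console
    routine, check for errors, push the result back."""
    fac = stack.pop()                      # 1. pop one float to FAC
    arg = stack.pop()                      # 2. pop one float to ARG
    result, error = console_divide(fac, arg)  # 3-4. call routine, return
    if error:                              # 5. check for errors
        raise ZeroDivisionError("floating point error from console routine")
    stack.append(result)                   # 6. push result in FAC to stack
    return stack

def console_divide(fac, arg):
    """Stand-in (hypothetical) for the console's divide routine:
    computes ARG / FAC and reports divide-by-zero."""
    if fac == 0:
        return None, True
    return arg / fac, False
```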
-
The GROM chips on the p-code card contain the data that makes up the OS: volume in the p-system. This is where the operating system is stored. These chips also store assembly code that's transferred from GROM to RAM on start-up. Since the p-system has to be able to do things even when the p-code card is paged out, some of the operating system has to reside in RAM. The 8 K RAM part is used for this. The user's code is stored in either VDP RAM or the 24 K RAM, depending on where it fits (and code type).
The PME (p-machine interpreter) can execute code that's residing in VDP RAM, normal RAM or GROM. Thus code that's in the GROM chips on the p-code card can be executed directly, without having to be moved to other memory first. The PME is of course aware of which memory addresses to access to set the address and read data from these chips. Thus the PME itself will also have to be modified slightly, to allow it to work if these GROM chips are moved to a different base address. Some data in the OS: volume is accessed as ordinary files. The BIOS on the p-code card, which is responsible for accessing this GROM disk, also has to be found and modified.
Anyway, you can be certain that there's no GPL code in the p-code card. There is code, but that's p-code. The GPL interpreter isn't running anything here, except perhaps floating point math routines (I haven't checked how they are implemented, but I have a complete disassembly of the 12 K ROM on the p-code card, so I could check).
-
There has to be some small stub of the memory management software always available, or it would not work. You can't page out yourself, as that's equivalent to software suicide.
In general, you can make such memory managers in many different ways. One way is to create a RAM disk. The disadvantage is that you have to go via the file access protocol for the machine to access data. The advantage is that you can run the same thing on a machine without this memory, if you instead use a normal disk. It will just be slower.
If you want the language you are using to be able to access a variable in additional memory, just as it does in built-in memory, then you have the problem that you either need the memory always active, or you have to modify how that language accesses memory when using variables. It's simpler in languages with built-in support for dynamic memory allocation. I did it for Pascal, to use RAM in module space. That gives you 8 K more variable RAM, but looks exactly the same when using the variable inside the program. A separate call procedure was used to create the variable.
For Extended BASIC, I wrote a memory expander that used the 8 K RAM as a sequential allocation area, with random access. It worked as described above. You called a store procedure, which gave you a handle back. Via this handle you could read back your variable with a call procedure. When you had filled the memory, you couldn't release anything except all of it in one fell swoop. You could store and recall the whole memory area in a file, though. But this means you have to CALL LINK each time you want to do anything with this memory.
More complex algorithms can of course be deployed. If you want to be able to not only allocate memory, but also release it, a simple way is of course the mark/release concept found in early UCSD Pascal implementations. You allocate sequentially, access randomly and release sequentially.
Thus if you have allocated space for 100 items, and then roll back to item #50, then you can't any longer use any item in the range #51 to #100. The advantage is that tracking used memory is a simple pointer. Anything above it is free.
If you want a fully flexible system, where you can allocate and release an arbitrary memory area, in any order, then you have to start keeping track of what's free, even when it's not a contiguous area. This can, for example, be done with a linked list, containing pointers to the free space, size information and a pointer to the next link. Simple and fully flexible, but it takes time if you have to traverse many links to find an area big enough. You can also have a bit map of available areas. That's similar to how a disk system may work. It does require you to make a decision about the size of the allocation units, or the bit map will be overwhelming.
No matter what strategy you come up with, you have to handle the fact that you'll get scattered free spaces, if it's possible to release random memory areas. Eventually, you'll run into the situation that there's enough memory to allocate the size you need, but not in a contiguous area. Then you have to either give an error (out of memory), or do a garbage collection, to consolidate the free spaces into bigger ones. You can do a full garbage collection, where all free memory is brought into one single area, or a partial one, where you only move things around until you have combined smaller free areas into one that's big enough to cater for the immediate demand. There's a lot written about virtual and expanded memory handling, so I'll stop here, in this post.
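The mark/release scheme above can be sketched in a few lines of Python. The names here are illustrative only, not UCSD Pascal's actual mark/release/new interface: allocate sequentially, access randomly via the returned address, and release sequentially by rolling the heap top back to a saved marker.

```python
class MarkReleaseHeap:
    """Sketch of a mark/release heap: a single pointer tracks used
    memory; everything above it is free."""

    def __init__(self, size):
        self.memory = bytearray(size)
        self.top = 0                     # heap top: the only bookkeeping

    def mark(self):
        # Remember the current heap top for a later rollback.
        return self.top

    def new(self, nbytes):
        # Sequential allocation: bump the pointer, return the old top
        # as the block's address.
        if self.top + nbytes > len(self.memory):
            raise MemoryError("out of heap space")
        addr = self.top
        self.top += nbytes
        return addr

    def release(self, marker):
        # Everything allocated after mark() is freed in one sweep;
        # addresses above the marker must no longer be used.
        self.top = marker
```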
-
I've understood that moving the p-code card to a cartridge, or into the console, isn't exactly easy. So it will perhaps never happen. Anyway, just to remove one possible confusion: There's no GPL used on the P-code card. Reading some posts above I got the impression that some seem to think so.
-
I meant that the p-system will only use the 32 K RAM expansion (well, and the available part of VDP RAM) for paging in and out different program segments, since no more memory was available at the time when said system was adapted to run on the TI 99/4A.
-
You also have to remember that the memory in a normal TI 99/4A isn't running at full speed. Well, inside the console it is, but not memory in cartridges, DSR or expansion RAM. This has an impact on instruction timing. I've noticed that if both code and workspace are in slow RAM (normal expansion), the execution time will increase by about 110%, compared to if both code and workspace are in fast, 16-bit wide RAM. The most common mix is of course to run code in slow memory and have the workspace in fast memory.
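A rough model shows where a figure of that magnitude can come from. The 4-cycle penalty per memory word access on the 8-bit multiplexed bus is my assumption for illustration; the real slowdown depends on the instruction mix, which is why the measured figure is "about 110%" rather than an exact number.

```python
# Assumed (not measured from the post): each memory word access
# through the 8-bit bus costs about 4 extra clock cycles.
EXTRA_CYCLES_PER_ACCESS = 4

def instruction_time(base_cycles, slow_accesses):
    """Cycles for one instruction when slow_accesses of its memory
    accesses hit 8-bit-wide RAM."""
    return base_cycles + EXTRA_CYCLES_PER_ACCESS * slow_accesses

# Register-to-register MOV: 14 base cycles, 4 memory word accesses
# (instruction fetch, source read, destination read and write).
fast = instruction_time(14, 0)   # code and workspace in 16-bit RAM
slow = instruction_time(14, 4)   # both in 8-bit expansion RAM
increase = (slow - fast) / fast  # ~114 %, in the same ballpark as ~110 %
```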
-
That's one of the main reasons why I did all my software development on the TI 99/4A in Pascal, when I could choose the platform myself. It does provide exactly that, but only inside the 32 K RAM expansion, of course. Nothing bigger existed when it was designed.
-
im looking for an emulator that does the 16 bit 32k upgrade mod
apersson850 replied to xxx's topic in TI-99/4A Computers
I took a look at the thread about the SAMS card, so now I know how it works. The 74LS612 memory mapper is a handy device. It's of course non-standard, but since my internal memory design allows paging RAM pages of 8 K in or out, and those that are paged out will allow access to whatever is there otherwise (external RAM expansion, console ROM etc.), it could easily co-exist with a SAMS card. Run your program in up to 24 K internal 16-bit RAM (note that this allows you to place both code and workspaces wherever you like - they are equally fast), and open an 8 K window to two SAMS pages at a time. -
im looking for an emulator that does the 16 bit 32k upgrade mod
apersson850 replied to xxx's topic in TI-99/4A Computers
I see. It would work with my kind of internal memory expansion, then, since I can control 8 K banks with CRU bits. If I disable internal RAM in the standard RAM expansion address range, then the computer will fall back to using the standard 32 K RAM expansion, if there is any. -
Sorry, by "copy" I rather meant "instance". Mine is a genuine Hewlett Packard, acquired in 1983. It's now on its second set of batteries. But my HP 15C is a modern re-run, which I bought a few years ago.
-
Ah, an HP 16C to brighten things up too! My copy of that calculator says Hello!
-
im looking for an emulator that does the 16 bit 32k upgrade mod
apersson850 replied to xxx's topic in TI-99/4A Computers
I don't have any SAMS 1 Meg thing. Why does an internal 32 K RAM prevent SAMS from being used? What's the technical reason? If you have a ROM cart and the kind of internal memory expansion I have, which is really 64 K RAM, it's technically possible to run the cart ROM faster too. I can copy it to my fast RAM, then enable RAM instead of cartridge ROM at the cartridge address space. But that requires the cartridge to be no more than 8 K, and loading the thing may be somewhat tricky. -
48 tracks per inch, 40 track drives work. I have Teac FD 55B, as one example.
-
My own internal (in the console) RAM expansion uses CRU addresses at >400 to enable and disable banks of that memory. I can enable RAM in the DSR space (and everywhere else) with a CRU access based at >400. This is just to prove that such assumptions as the SID Master 99 makes are always dangerous. As is my own assumption, that CRU address >400 will not interfere with anything.
-
Yes, I know, but it's the housing of the calculator.
-
The TI 990 system also uses different segments. But the loader for the TI 99/4A doesn't implement them. The relatively small memory may be one reason. The target audience another.
-
That's why I liked it best for program development. A standard program isn't as fast as Forth, but much faster than Extended BASIC, when it executes. But if you time the total time from idea until debugged implementation, then it competes favorably. Since it's easy to convert whatever procedure is needed to assembly, and you can have these assembly routines inside a precompiled unit, it's also a pretty slick process to develop a working solution, using a Pascal-only program and library. If you want to run it many times, you can start converting critical procedures, either in your main program or in the library, or in both, to assembly, as you prefer.
-
Interrupt Service Routine and the RS232
apersson850 replied to InsaneMultitasker's topic in TI-99/4A Development
Yes, at a slower baud rate you always have more time between each character being completely received. But you don't necessarily have more time between the characters, i.e. from the last bit in character n to the first bit in character n+1. You can receive 20 characters per second at 38400 bits/s. You just have a comparatively long time between the last bit of character n and the first in character n+1. Of course, normally you want to be as efficient as possible, so you try to send the characters out as soon as possible. Thus the delay is usually held short. I just wanted to point out that there could be another delay here. If you design both ends of the communicating system, you do have the ability to use a high baud rate (provided the cable is short enough and the UART can handle it), and then make sure the transmitting end doesn't send the next character immediately after the previous one. This can yield a higher throughput than going down in baud rate and sending the characters packed as tightly as possible. I hope the coffee was good! -
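The point about pacing can be checked with simple arithmetic. This sketch assumes an 11-bit character frame; the 2 ms gap and the 2400/38400 comparison are made-up example figures, not anything from the posts above.

```python
BITS_PER_CHAR = 11          # assumed frame: start, 8 data, parity, stop

def char_time(baud):
    """Seconds on the wire for one character frame."""
    return BITS_PER_CHAR / baud

def packed_rate(baud):
    """Characters per second with no gap between characters."""
    return baud / BITS_PER_CHAR

def paced_rate(baud, gap):
    """Characters per second with a fixed gap after each character."""
    return 1.0 / (char_time(baud) + gap)

# A character at 38400 bits/s takes only ~0.29 ms, so even with a 2 ms
# pause after each one the paced high-speed link still beats 2400 bits/s
# with the characters packed back to back.
```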
Interrupt Service Routine and the RS232
apersson850 replied to InsaneMultitasker's topic in TI-99/4A Development
Just in case somebody hasn't grasped this: the baud rate, really the number of bits per second that are transmitted, applies to one single character only. When using a baud rate of 9600, it means that the duration of a single zero or one bit is 104 µs. This is valid inside a character, which typically involves transmitting eleven bits, of which eight are data. Thus the duration of one character is 1.15 ms. The higher the baud rate, the shorter the time between the bits. But it's up to the UART chip to handle this. It will receive the bits and put them together into one character. So far, it doesn't involve the processing capacity of the CPU.
It's when the UART has received a character that the race starts. It will set an output to signal that there is a character to read, and will start receiving the next one, if one more is transmitted. The character that's now ready to read must be read before the next one is assembled. If it isn't, an overrun condition will occur. This can be handled in different ways, but a typical result is that a character goes missing. Or an error could be generated.
This means that you can have a very high baud rate, but still only send a single character every second. It doesn't make sense, but it's doable, legal and implies no significant load on the CPU at all. What you can't do is have a slow baud rate, like 300, and send/receive many characters per second. At that baud rate, a single character takes so long to send that you can't do more than 27 per second.
The benefit of using interrupts is that you can connect the "I have a character" output from the UART to the CPU's interrupt input, and have the CPU empty the single-character buffer, regardless of when it happens. The CPU software doesn't have to repeatedly read the UART to see if there's something. It can do whatever it has to do, and the incoming character will be read when it arrives.
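The figures above follow directly from the frame length. A quick check, assuming the same 11-bit frame (start bit, eight data bits, parity, stop bit, for example):

```python
BITS_PER_CHAR = 11

def bit_time_us(baud):
    """Duration of one bit in microseconds."""
    return 1_000_000 / baud

def char_time_ms(baud):
    """Duration of one 11-bit character frame in milliseconds."""
    return BITS_PER_CHAR * bit_time_us(baud) / 1000

def max_chars_per_second(baud):
    """Upper limit with characters sent back to back."""
    return baud / BITS_PER_CHAR

# bit_time_us(9600)          -> ~104 us per bit
# char_time_ms(9600)         -> ~1.15 ms per character
# max_chars_per_second(300)  -> ~27 characters per second
```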
The problem with this approach is that to manage high communication speeds, the interrupt service routine must be short and efficient. Whatever you can say about the servicing of external interrupts coming from the expansion box, it is neither short nor efficient. That's why there's a problem with many characters per second. Not with the 38400 bits/second by itself, but with the many characters per second it allows. -
On another level, the p-system can do the same thing, but also allows cooperation between assembly and Pascal programs. The assembler allows definition of procedures and functions that can be called by other assembly programs, but also from Pascal. It also allows you to define data that's accessible from the outside (from Pascal), as well as reference global data, declared in a Pascal program, from assembly level. To make this possible does require that the linking is done separately, before loading. Thus there's no linking loader that can handle this kind of cross-reference. You link the Pascal and assembly programs together and produce a code file containing them all. Then that's the code file you execute.
The p-system does have the same kind of link and load (actually load and link) capability too, but then it's between a Pascal program and separately compiled units. These units reside in some library file, which is referenced from the main program. The referenced units will then be loaded by the operating system, as much as is necessary to find them, at load time. The code in the units will only be loaded when it's actually used, and can be rolled out from memory again, if other code segments need the space (and the first one isn't used any more, of course). It's also possible to break up both units and your own programs into segments, which will be loaded when referenced.
-
I live approximately 20 km away from where IKEA all started.
