I guess the code could even be mapped in so that it wasn't always in the active memory range; a simple BLWP call would page it in, and when it exited it would page itself out again and restore whatever was mapped there before.
There has to be some small stub of the memory management software always available, or it would not work. You can't page out yourself, as that's equivalent to software suicide.
In general, you can make such memory managers in many different ways. One way is to create a RAMdisk. The disadvantage is that you have to go via the machine's file access protocol to reach the data. The advantage is that you can run the same program on a machine without this memory, using a normal disk instead. It will just be slower.
If you want the language you are using to be able to access a variable in additional memory, just as it does in built-in memory, then you have the problem that you either need the memory always active, or you have to modify how that language accesses memory when using variables. It's simpler in languages with built-in support for dynamic memory allocation. I did it for Pascal, to use RAM in module space. That gives you 8 K more variable RAM, but it looks exactly the same when using the variable inside the program. A separate call procedure was used to create the variable.
For Extended BASIC, I wrote a memory expander that used the 8 K RAM as a sequential allocation area with random access. It worked as described above: you called a store procedure, which gave you a handle back, and via this handle you could read your variable back with a call procedure. When you had filled the memory, you couldn't release anything except all of it in one fell swoop. You could store and recall the whole memory area in a file, though. But this means you have to CALL LINK each time you want to do anything with this memory.
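To make the scheme concrete, here's a rough C sketch of that store/recall idea (the names store, recall and release_all are made up for illustration; the real thing was of course CALL LINKed assembly, not C):

```c
#include <string.h>

/* Sketch of a sequential allocation area with random access:
 * store appends data and returns a handle (here simply the offset),
 * recall copies it back, and the only way to free anything is to
 * release the whole area at once. */
#define ARENA_SIZE 8192                 /* the 8 K expansion RAM */
static unsigned char arena[ARENA_SIZE];
static int arena_top = 0;               /* everything below this is in use */

int store(const void *data, int len)    /* returns handle, or -1 if full */
{
    if (arena_top + len > ARENA_SIZE)
        return -1;                      /* out of expansion memory */
    int handle = arena_top;
    memcpy(&arena[arena_top], data, len);
    arena_top += len;
    return handle;
}

void recall(int handle, void *out, int len)  /* random access by handle */
{
    memcpy(out, &arena[handle], len);
}

void release_all(void)                  /* the only release: all of it */
{
    arena_top = 0;
}
```

Note that allocation state is just one pointer, which is why the scheme is so cheap; the price is that individual items can never be freed.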
More complex algorithms can of course be deployed. If you want to be able to not only allocate memory, but also release it, a simple way is the mark/release concept found in early UCSD Pascal implementations. You allocate sequentially, access randomly and release sequentially. Thus if you have allocated space for 100 items and then roll back to item #50, you can no longer use any item in the range #51 to #100. The advantage is that tracking used memory is a simple pointer. Anything above it is free.
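In C, mark/release amounts to saving and restoring that one pointer (again a sketch with made-up names, not the UCSD Pascal runtime itself):

```c
/* Mark/release in the UCSD Pascal style: a mark is just the current
 * value of the allocation pointer, and release rolls the pointer back,
 * invalidating everything allocated after the mark. */
#define POOL_SIZE 8192
static unsigned char pool[POOL_SIZE];
static int pool_top = 0;        /* the single pointer tracking used memory */

typedef int mark_t;

void *allocate(int len)         /* sequential allocation */
{
    if (pool_top + len > POOL_SIZE)
        return 0;
    void *p = &pool[pool_top];
    pool_top += len;
    return p;
}

mark_t mark(void)               /* remember the current top */
{
    return pool_top;
}

void release(mark_t m)          /* everything above m becomes free again */
{
    pool_top = m;
}
```

The catch shown in the text is visible here: release can only roll back to a mark, so freeing item #50 necessarily frees #51 to #100 as well.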
If you want a fully flexible system, where you can allocate and release an arbitrary memory area, in any order, then you have to start keeping track of what's free, even when it's not a contiguous area. This can, for example, be done with a linked list, containing pointers to the free space, size information and a pointer to the next link. Simple and fully variable, but takes time if you have to traverse many links to find an area big enough.
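A minimal first-fit version of such a free list could look like this in C (my own sketch; block layout, alignment rounding and names are illustrative choices, not from any particular system):

```c
#include <stddef.h>

/* First-fit free list: each free area begins with a header holding its
 * size and a pointer to the next free area. */
typedef struct free_block {
    size_t size;                     /* size of this area, header included */
    struct free_block *next;
} free_block;

static free_block *free_list = NULL;

void fl_init(void *mem, size_t size)     /* one big free area to start */
{
    free_list = (free_block *)mem;
    free_list->size = size;
    free_list->next = NULL;
}

void *fl_alloc(size_t size)
{
    /* Round up for the header and alignment. */
    size = (size + sizeof(free_block) + 7) & ~(size_t)7;
    free_block **prev = &free_list;
    for (free_block *b = free_list; b; prev = &b->next, b = b->next) {
        if (b->size >= size) {           /* first area big enough wins */
            if (b->size - size > sizeof(free_block)) {
                /* Split: the remainder stays on the free list. */
                free_block *rest = (free_block *)((char *)b + size);
                rest->size = b->size - size;
                rest->next = b->next;
                *prev = rest;
                b->size = size;
            } else {
                *prev = b->next;         /* take the whole area */
            }
            return (char *)b + sizeof(free_block);
        }
    }
    return NULL;    /* no single area big enough, even if total free is */
}

void fl_free(void *p)                    /* push the area back on the list */
{
    free_block *b = (free_block *)((char *)p - sizeof(free_block));
    b->next = free_list;
    free_list = b;
}
```

The traversal cost mentioned above is the for loop in fl_alloc: the more fragmented the memory, the more links it may have to walk. (A real version would also merge adjacent free areas in fl_free, which this sketch omits.)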
You can also keep a bit map of available areas. That's similar to how a disk system may work. It does require you to decide on the size of the allocation units, or the bit map will be overwhelming.
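With 256-byte units, for example, the bit map for the whole 8 K area fits in four bytes. A hedged sketch of the idea (unit size and names are my own choices):

```c
/* Bit map allocator: the 8 K area divided into fixed allocation units,
 * one bit per unit marking it used (1) or free (0). */
#define UNIT   256
#define UNITS  (8192 / UNIT)             /* 32 units -> 4-byte bit map */
static unsigned char bitmap[UNITS / 8];

static int bit_get(int i) { return (bitmap[i / 8] >> (i % 8)) & 1; }

static void bit_set(int i, int v)
{
    if (v) bitmap[i / 8] |=  (1 << (i % 8));
    else   bitmap[i / 8] &= ~(1 << (i % 8));
}

/* Find n contiguous free units; returns the first unit index, or -1. */
int bm_alloc(int n)
{
    for (int start = 0; start + n <= UNITS; start++) {
        int run = 0;
        while (run < n && !bit_get(start + run))
            run++;
        if (run == n) {                  /* found a long enough run */
            for (int i = 0; i < n; i++)
                bit_set(start + i, 1);
            return start;
        }
        start += run;   /* skip past the used unit that broke the run */
    }
    return -1;
}

void bm_free(int start, int n)           /* clear the bits again */
{
    for (int i = 0; i < n; i++)
        bit_set(start + i, 0);
}
```

The trade-off from the text shows up directly: smaller units waste less memory per allocation but make the bit map bigger and the search for a free run longer.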
No matter what strategy you come up with, you have to handle the fact that you'll get scattered free spaces, if it's possible to release random memory areas. Eventually, you'll run into the situation that there's enough memory to allocate the size you need, but not in a contiguous area. Then you have to either give an error (out of memory), or you have to do a garbage collection, to consolidate the free spaces to bigger ones. You can do a full garbage collection, where all free memory is brought into one single area, or a partial one, where you only move around stuff until you have combined smaller free areas to one, that's big enough to cater for the immediate demand.
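One classic way to make such compaction possible at all is to address blocks through a handle table rather than by direct pointers, so the collector can move blocks and just update the table. A sketch of a full compaction along those lines (all names and the bump-only allocator are illustrative assumptions):

```c
#include <string.h>

/* Blocks are reached via handles (indexes into a table of offsets), so
 * compaction can slide live blocks together and fix up the table;
 * direct pointers would be invalidated by the move. */
#define HEAP_SIZE   8192
#define MAX_HANDLES 64

static unsigned char heap[HEAP_SIZE];
static int heap_top = 0;

typedef struct { int offset, size, live; } handle_rec;
static handle_rec handles[MAX_HANDLES];
static int n_handles = 0;

int h_alloc(int size)                 /* returns a handle index, or -1 */
{
    if (heap_top + size > HEAP_SIZE || n_handles == MAX_HANDLES)
        return -1;
    handles[n_handles] = (handle_rec){ heap_top, size, 1 };
    heap_top += size;
    return n_handles++;
}

void h_free(int h) { handles[h].live = 0; }   /* mark dead, don't reuse yet */

void *h_deref(int h) { return &heap[handles[h].offset]; }

/* Full garbage collection: slide all live blocks down into one run,
 * leaving a single contiguous free area at the top. */
void compact(void)
{
    int dest = 0;
    for (int h = 0; h < n_handles; h++) {
        if (!handles[h].live)
            continue;
        memmove(&heap[dest], &heap[handles[h].offset], handles[h].size);
        handles[h].offset = dest;     /* only the table entry changes */
        dest += handles[h].size;
    }
    heap_top = dest;
}
```

A partial collection would do the same thing, but stop as soon as it has combined enough free space for the immediate request instead of sweeping the whole heap.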
There's a lot written about virtual and expanded memory handling, so I'll stop here, in this post.
Edited by apersson850, Sun Mar 3, 2019 3:55 AM.