
Action! Source Code



Jindroush's pages mirrored on AtariMax should have all the carts broken down for you:

https://www.atarimax.com/jindroush.atari.org/acarts.html

Scroll down to OSS, and then down to SDX.

 

There's lots of info tucked into places on that mirrored site, but it doesn't look like the complete site with all the dumps is there.

Edited by _The Doctor__

 

...

 

Using banked memory (in some form, anyway) to allow for larger binary files. I think it would be reasonable for the compiler to generate code running from, say, $2000 right through to $B000 (36K). That means the hash tables and the symbol table have to go somewhere else.
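As a quick sanity check on that figure (a throwaway sketch; the bounds are just the ones proposed above, not anything from the actual Action! sources):

```python
# Hypothetical bounds from the proposal above, not from the real tool.
CODE_START = 0x2000   # low end of the object-code buffer
CODE_END   = 0xB000   # high end (exclusive)

buffer_bytes = CODE_END - CODE_START
print(buffer_bytes // 1024)   # 36
```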

 

...

 

What other ideas?

 

I'd love to see a larger symbol table by default. I don't know how doable that is, but it doesn't take much to exceed the symbol table space in Action! (for example, when you also include the runtime library). Supposedly there's a way to expand the symbol table by running a program called BIGST.ACT first, but that never seemed to work for me.

 

-JP

Edited by JohnPolka

BIGST just gives you 512 global variables instead of 256; it doesn't actually expand the symbol table space. For that you have to poke RAMTOP and reboot the cartridge, I think, something like that. The manual explains how in the section on the SET statement.

 

 

I was looking at the cartridge page, and none of the carts seems very good. The XEGS carts seemed promising because there was an explanation of the ROM layout, but they take up 16K. That only leaves around 20K for editor space, or code space, and that just doesn't seem like a lot. I think the answer will have to be an XE version, to get everything out of the main window. Or something that runs the 65816 in 8-bit native mode and gets the entire cart out of bank zero.


> BIGST just gives you 512 global variables instead of 256. It doesn't actually expand the symbol table space. For that you have to poke RAMTOP I think and reboot the cartridge, something like that. The manual explains how in the section on the SET statement.

 

...

 

 

Thanks. I looked at BIGST.ACT again, and there are instructions for that SET command in the program's comments. I don't remember whether I tried the SET command or not. I will try it the next time I run out of symbol table space.

 

-JP

Edited by JohnPolka

Sure, if 16K goes to the language, then there is not that much left. MS BASIC II was heavy: cartridge plus extension disk, and that left very little for the user. So Alfred, do you have a 4K bank switch in mind?

 

I dunno if that's the answer, but I think a 32K OSS-style cart would sure be better than some of these other supercarts.


Banked carts/banked RAM would always be best used as a (kind of) RAM disk. Limiting the changes just to keep the 8K window makes no sense and makes the code unmaintainable.

The 16K cart version in my download is the first step in that direction. The editor and compiler executable can easily be swapped into RAM later, or be used from the command line.

This way the source code will also always be safe, and won't go to hell as it did frequently in my case, when a bug wrote stray data across the whole of memory.


Action! is pretty tied to having a big linear block of RAM that the object code is written to. It would be best to stick with that model; otherwise you're looking at major changes to how it works. So I'm thinking it would be best to move as much as possible out of the main RAM area in order to keep that big object-code buffer. The other idea is to redirect the object code to the XE banks. That would give you the ability to have a huge binary, but I'm not sure how much it breaks the compiler. There's no possibility of storing any bank offset in the compiler's stack frames; that would just break everything. What I'm thinking of is front-ending every single STA (QCODE),Y with a JSR to a routine that converts the address to the proper bank and offset. It'd be a lot of work, and I'm not even sure at this point that it's feasible without having a good look at the code generator. There's just so much indirection that you really have to trace through everything to see what actually stores to the codebase.
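A minimal sketch of the translation such a front-end routine would have to do, assuming the standard 130XE scheme where extended banks appear in the $4000-$7FFF window (the function and names here are made up for illustration, not taken from the Action! sources):

```python
BANK_WINDOW = 0x4000   # 130XE extended banks appear at $4000-$7FFF
BANK_SIZE   = 0x4000   # 16K per extended bank

def translate(linear):
    """Map a linear object-code offset to (bank number, CPU address),
    the job a hypothetical JSR front-end would do before each store."""
    return linear // BANK_SIZE, BANK_WINDOW + (linear % BANK_SIZE)

bank, addr = translate(0x5ABC)   # bank 1, CPU address $5ABC
```

The 6502 code would select `bank` (via PORTB on a 130XE), then do the store to `addr`.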


> Would be interesting to see if access via STA (QCODE),Y is completely linear, or random access.

 

Not quite sure what you mean, but it's mostly linear, if I've guessed your question correctly. The compiler emits the object code pretty much directly as it compiles. There is some backpatching during the processing of a statement if it's an OD, or the only statement of a FOR loop, where the JMP to the top of the loop has to be fixed. I think most of the FI clauses are closed at the end of the procedure, unless the code block was short enough that it could be patched sooner. Other than that, there isn't much else: large arrays are only assigned after all the code has been output, and the larger FI jumps are also fixed up at the end.

 

It's why I think prefixing the QCODE operations is possibly a way to go. The problem is that QCODE isn't the only way the code gets updated. Cases like A==+1 are resolved by looking back a couple of stack frames and altering the code via either STKPTR or one of the ARGSx zero-page registers. Then there are the large array allocations and the closures for IF/FI, which are also done by fetching addresses from the stack frames. So it's an idea; I just don't know how practical it will actually be.

 

I'm really starting to think that making an 816 version that runs native with 8-bit registers is the way to do a first stage. Very little code to change; the cart could sit at $01C000-$01FFFF, and the fake MEMLO could be set to, say, $0700. So depending on the symbol table space you could get close to 40K out of it. The hash tables and the symbol table aren't accessed by many routines, so it's possible they could be moved to bank $02, freeing up even more space. If I did that, I'd probably just get rid of the hash tables, make the symbol table, say, 30K, and use the rest for a binary-searchable table.

 

I'm almost done with Six Forks, and after that I intend to get serious with my 816 DOS, which I see as a precursor to a lot of other things. Maybe not a necessary one, but useful nonetheless.


Hi,

I meant: is the output always sequential, i.e. written to increasing memory locations (output_buffer_start, output_buffer_start+1, output_buffer_start+2, etc.), or is it random access (output_buffer_start, output_buffer_start+20, output_buffer_start+3, etc.)?

 

If you are just dealing with sequential output, writing it to disk would make sense. If you're handling separate backtracking updates, you could write those updates to another file and use that intermediate file to update the main file, a bit like a journalling file system.

 

The reason I am so hot on files for output is that I think a ramdisk is a good solution, especially as a memory upgrade is (I guess) the most common upgrade (and probably the cheapest), and it should still work with a floppy disk too. If you handle the bank-select logic yourself, you have to support the different RAM upgrades available (tedious), or not support some of them. Off-loading that to a ramdisk/file system might be better.

 

If you want to profile how the STA (QCODE),Y writes behave, you could replace that code with code that logs the address being written (e.g. dump the values of QCODE and Y) and see whether access via STA (QCODE),Y really is sequential.

 

I guess I mean:

 

1) generate output file + fix-up file

2) update output file using fix-up file


> I meant: is the output always sequential, i.e. written to increasing memory locations (output_buffer_start, output_buffer_start+1, output_buffer_start+2, etc.), or is it output_buffer_start, output_buffer_start+20, output_buffer_start+3, etc.?
>
> I guess I mean: 1) generate output file + fix-up file, 2) update output file using fix-up file.

 

OK, in that case, yes, it is sequential. There are various routines to add instructions, and they all just advance QCODE by however many bytes they added. As I said, in the case of, say, A==+1, the compiler emits something like $AF=$AF+1, and then, as a special-case check of ==, it looks back in the frames, finds the "A", and updates the just-created code to change it to A=A+1. Other than stuff like that, though, the code is output sequentially, 1-4 bytes at a time.

 

Code blocks [$xx $xx] are just copied directly to the output buffer from the source after being converted from ATASCII to binary.
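That conversion step could look roughly like this (a hypothetical sketch in Python, not the compiler's actual routine; the `[$xx $xx]` literal syntax is the one mentioned above):

```python
def parse_code_block(text):
    """Turn an Action!-style code block like '[$A9 $2A $60]' into
    the raw bytes that get copied to the output buffer."""
    inner = text.strip().strip('[]')
    return bytes(int(tok.lstrip('$'), 16) for tok in inner.split())

machine_code = parse_code_block('[$A9 $2A $60]')   # LDA #$2A : RTS
```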


Hi,

I've never written a line of Action! code; I didn't get the cartridge when it first came out, and I don't have a disk-based version. I've always heard good things about it though, and coding with it has been on my someday/maybe list since getting back into the 8-bit. If I were going to extend the Action! compiler, I would try to support generating large executables that use bank selection, i.e. keep track of which function sits in which bank. I would try to abstract bank selection into logical banks, and use a configuration file to handle selecting the actual physical banks (in terms of byte values for PORTB, I think).
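One way that logical-to-physical mapping could look, assuming a stock 130XE (on which the CPU bank-select bits live in PORTB at $D301; the PORTB values below are the standard 130XE ones, everything else is a made-up illustration):

```python
# Stock 130XE PORTB ($D301): bit 4 = 0 enables CPU access to extended
# RAM, bits 2-3 pick one of the four 16K banks, remaining bits stay 1.
PHYSICAL_BANKS = [0xE3, 0xE7, 0xEB, 0xEF]   # PORTB values, banks 0-3
MAIN_RAM = 0xFF                             # extended RAM switched out

def portb_for(logical_bank):
    """A per-machine config table would supply PHYSICAL_BANKS, so the
    compiler only ever deals in logical bank numbers."""
    return PHYSICAL_BANKS[logical_bank]
```

Supporting a different RAM upgrade would then just mean shipping a different table.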

 

I appreciate that, after arguing for compile-to-ramdisk, generating large executables that use bank selection is going to be a little more complicated to test (output to ramdisk, copy to floppy, nuke ramdisk, test the file on floppy, create a new ramdisk, repeat as required). But I'm guessing that if you want to make very large Action! programs, there aren't too many other routes you can take if you're intent on developing on the 8-bit?

 

Not sure how practical this would be, but I vaguely remember being able to specify different memory models for various compilers on the PC, depending on how large the executable was going to be. I'm wondering about similar logic for accessing functions and (global/referenced) data in different banks.

 

Not sure how cc65 handles this (it's also on my someday/maybe list), though I did do some Deep Blue C coding back in the day.


> I appreciate that after arguing for compile to ramdisk, generating large executables that use bank selection is going to be a little more complicated to test, e.g. output to ramdisk, copy to floppy, nuke ramdisk, test file on floppy, create new ramdisk, repeat as required, but I am guessing if you want to make very large Action programs, there aren't too many other routes you can take if you are intent on developing on the 8-bit?

 

That is exactly what happened to me, and why I took over this whole thing. I have an ACTION! project that grew so big it cannot reasonably be handled anymore without compiling externally and using the RAM under the ACTION! cartridge.


> That is exactly what happened to me and why I took over this whole thing. I have an ACTION! project that grew so big, it cannot be handled anymore reasonably without compiling externally and using the RAM under the ACTION! cartridge.

What kind of program? How big do you think it would be if you could finish it?


Ultimately you end up compiling from disk and to disk when your program gets big. For instance, I have compiled programs that were >20K, with the source having Graphics 15 screens as code blocks of ~24K each. That's just the way things work on a 64K 8-bit machine. It shouldn't be seen as a limitation of the cartridge/language but a limitation of the hardware. It's good programming practice anyway; I mean, who would program in assembler and not save before a compile? FWIW, I used to do the same thing when programming in C on the MS-DOS platform.

 

As soon as you commit to programming in modules and from disk, like everything else, it really isn't that bad. You think of the defaults as just something you use for short test code and debugging. As soon as you have 10K of source code plus the 10K of code it compiles to, you've pretty much used up the entire memory map available for compiling from memory to memory. I have had programs where I could do it but couldn't run the result after the compile, because the extra memory required would step on the top part of my program.

 

Wetmore did HomePak with program switching for a reason: you just couldn't have all that functionality in memory at the same time.

 

As far as the cart is concerned, IMHO 4K fixed with 12K bank-switched in 4K blocks is fine. If you needed more room, just add more 4K switched banks. Something like a lowly 74LS74 gives you 2 bits of latch to work with, enough for four 4K banks. If you wanted to get fancy you could do something like a 74LS374 (see https://www.atarimax.com/jindroush.atari.org/data/acarts/xegs_docs1.jpg); it would be easy to convert that design to eight 4K banks. I think one of the features that could be added is the ability to switch the cart off, so you'd have the full memory map available vs. 32K of banks.
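The latch-width arithmetic behind those bank counts, as a throwaway sketch (illustrative only, nothing hardware-specific):

```python
def banks_from_latch_bits(bits, bank_size_k=4):
    """Each latch bit doubles the number of selectable ROM banks."""
    banks = 2 ** bits
    return banks, banks * bank_size_k   # (bank count, total K of ROM)

two_bit = banks_from_latch_bits(2)     # four 4K banks, 16K of ROM
three_bit = banks_from_latch_bits(3)   # eight 4K banks, 32K of ROM
```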

 

The symbol table is annoying. Nothing like having the program working right with the cartridge, then running out of symbol table space when you add the runtime. There are workarounds, and of course you're really talking about a large program with a lot of variables before you even hit the wall. If you know in advance you're going to have a problem, it isn't that tough to reuse your variables and trim your runtime down to only the routines you use. IMO this is pretty much standard programming practice too: just declare your loop counters as global variables and treat them accordingly. You seldom need more than a couple of 8-bit and 16-bit volatile values anyway.

