
GCC for the TI



For small stuff, I totally agree with Chue...

 

 

void enableRom(int crubase) {
  __asm__(
    "li r12,%0\n\t"     /* load the CRU base into r12                              */
    "sbo 0\n\t"         /* set CRU bit 0 at that base to switch the ROM on         */
    : : "i" (crubase)   /* "i" constraint: crubase must be a compile-time constant */
  );
}

It's just ugly when you have large code:

 

https://github.com/jedimatt42/fcmd/blob/f4734e4d944016bf83a93d0bbcaba2988d99909d/b2_mds_dsrlnkraw.c#L60

 

I've called 'naked' assembly functions from C many times...

 

enablerom.h

    extern void enableRom(int crubase);

enablerom.asm

    def  enableRom
enableRom:
    mov  r1,r12         ; r1 = int crubase (the first argument arrives in r1)
    sbo  0              ; set CRU bit 0 to switch the ROM on
    rt                  ; return to caller
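Calling it from C is then just an ordinary call; for example (0x1100 here is only an illustrative CRU base, use whichever card you mean):

enableRom(0x1100);   /* 0x1100 arrives in r1; the routine copies it into r12 */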

-M@


OK, it's a "naked" function I want to do. What's a "clothed" function then? :) I know how painful inlining is.

 

But in the case of naked functions, how are arguments mapped to registers or laid out on the stack? Your example suggests they're just mapped to R1, R2, ..., but I've seen stack fiddling with R10 in my generated code as well.

 

How can I access C variables, in particular char arrays? How do I return a value?


I think a 'clothed' function would be one where you use inline assembly and the mechanisms for declaring the inputs, the outputs, and which registers you play with.

In a 'naked' function, you pass values through registers, and return a value through r1.

Usually what I see going on the stack, the messing about with r10 at the beginning of a C function, is allocation of space for local variables and structures, plus preservation space for some registers; near the end of the function the stack is popped and restored to the same place. If you don't need to allocate extra temporary space, you do not need to mess with r10.

Compile a simple function like

int add(int a, int b) {
   return a+b;
}

Take a look at the generated .s file... make sure it wasn't inlined... You should see it only messes with r1 and r2; it should generate something like two instructions: adding r2 to r1, and a return.
That exemplifies the calling convention. After that you just have to think about what your parameters are and where they live.
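Something like this is what you can expect to see (a sketch only, assuming the convention described here with arguments in r1/r2 and the result in r1; the exact labels and directives your gcc emits may differ):

    def  add
add:
    a    r2, r1         ; r1 = a + b
    b    *r11           ; return to the caller (same thing as rt)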

Here, I'll use C parameter names that are named after the registers you'll receive them in...

extern int naked(int r1, int* r2, char r3, char* r4);

Four values will be passed in registers...

r1 and r3 are values: if a variable is passed, the value of the variable will be put in the corresponding register. r3, being a char, will carry its value in the high byte of the register.

r2 and r4 are both 16-bit address parameters, so if you have an int var1 you can pass its address as &var1 when calling, and the address of var1 will be in the corresponding register.

If you pass a string literal, or a char username[20] passed as username, then username is the address of that character array; declare the parameter as a char*, and the address will be stuffed into the corresponding register when the call is made.

If you return an int or an address, declare the naked function's return type to be 'int' or 'int *' and store the desired return value or address in r1 before returning.
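To make that concrete, a hypothetical body for the naked() declaration above might look like this (my own sketch, just following the register mapping described here):

    def  naked
naked:
    a    *r2, r1        ; add the int that r2 points at to the int value in r1
    movb r3, *r4        ; store the char (high byte of r3) at the address in r4
    rt                  ; whatever is left in r1 becomes the int return value

Calling it from C would then be something like naked(40, &var1, 'X', username), with 40 landing in r1, &var1 in r2, 'X' in the high byte of r3, and username in r4.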

 

You can mess with R10, subtracting from it to allocate temp space, so you can store registers before messing with them; for instance, if you need to stash R11 somewhere for the return. A function should always return with R10 at the same place it was when you entered the function, otherwise the stack will be offset incorrectly when you return.
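A minimal sketch of that pattern, assuming r10 is the stack pointer as described (myfunc and someOtherRoutine are just hypothetical labels):

myfunc:
    dect r10            ; make two bytes of room on the stack
    mov  r11, *r10      ; save the return address before bl overwrites it
    bl   @someOtherRoutine
    mov  *r10, r11      ; restore the return address
    inct r10            ; leave r10 exactly where it was on entry
    rt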

 

The easiest approach to stack allocation is to require the calling C functions to allocate any data or string buffer space and to pass it as addresses to your assembly function. Then your function doesn't need to mess with the stack directly; it can work on those chunks of data through the addresses handed to you in the positional registers.
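For instance (the names here are mine, purely to illustrate the pattern):

extern void fillBuffer(char* dest, int len);   /* hypothetical naked routine */

void demo() {
  char buffer[32];          /* the C caller owns the storage...                */
  fillBuffer(buffer, 32);   /* ...and the address arrives in r1, the len in r2 */
}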

 

One thing to note, though (and this is just theory): XOP, which isn't useful on the 4A, and CRU bit reads/writes are the only things you can't express in C. Writing to the sound chip? It's just a memory-mapped device. Writing to the VDP? Again, just a memory-mapped device. I've written DSRLNK in pure C with just an enableROM and disableROM helper as inline assembly. It worked on TIPI because my DSR makes no assumptions about the console DSRLNK routine being used, but the TIFDC failed because it expects remnants of the DSRLNK to be left behind in scratchpad... That could have been C, but it was easier to give in and borrow code from Tursi's libti99.

 

-M@

 

 

 

 

 


Hey!! :mad:

 

;)

 

but the TIFDC failed because it expects remnants of the DSRLNK to be left behind in scratchpad... That could have been C, but it was easier to give in and borrow code from Tursi's libti99.

The code you copy there is in turn just a port of the Editor/Assembler's version of DSRLNK. I never got my own working 100% either.

Edited by Tursi


The code you copy there is in turn just a port of the Editor/Assembler's version of DSRLNK. I never got my own working 100% either.

 

 

Yes, that code I linked to started with your code that lived in libti99, and then I purposefully broke the card search feature so that the crubase is specified instead of enumerated... This allows me to have a copy routine from TIPI's "DSK1." to the TIFDC's "DSK1." in the tool I'm building, where a path can be prefixed with the crubase... Oh, I probably released the simple version called TIPIFM...

 

But it isn't the assembly code that is ugly, it's the extra \n\t and quotes and whitespace in general... I haven't found a style any better than your examples, Tursi... I generally dislike languages inside languages: Terraform with embedded JSON chunks; JavaScript / ReactJS with JSX pretending to be HTML but not quite, with JavaScript expressions swizzled in between, and if you forget it isn't actually HTML you get bit; SQL inside Java; any server pages technology...

 

This assembly inside C is my idea of a vacation.. :)

 

----

 

Oh, I used another strategy in the TIPI messaging code and accessing it from C... There I pretty much defined my own calling convention: I wrote the routines as the sort you would BLWP to (sort of; I didn't write a vector, just did the LWPI and BL myself). So parameters just have to be stuffed into well-known addresses; in that case I used the first few registers in the GPLWS. Then a little inline assembly wraps the call...

 

https://github.com/jedimatt42/tipi/blob/c96721a0805eec9eec901e296b7395906148663a/examples/telnet/tipi_msg.c#L63

 

The function above that one returns a value.
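The shape of it is roughly this (a sketch of the idea only, not the actual tipi_msg.c code; tipiMsgRoutine is a hypothetical label, and the real code also switches workspaces with LWPI, which I leave out here):

#define GPLWS_R0 (*(volatile unsigned int *)0x83E0)   /* GPL workspace r0 */
#define GPLWS_R1 (*(volatile unsigned int *)0x83E2)   /* GPL workspace r1 */

unsigned int callTipiMsg(unsigned int cmd, unsigned int arg) {
  GPLWS_R0 = cmd;                      /* parameters go to agreed-upon addresses */
  GPLWS_R1 = arg;
  __asm__ volatile (
    "bl @tipiMsgRoutine\n\t"           /* the routine reads and writes >83E0...  */
    : : : "r11", "memory"              /* bl clobbers r11                        */
  );
  return GPLWS_R0;                     /* the result is left at a known address  */
}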

 

 

-M@


But it isn't the assembly code that is ugly, it's the extra \n\t and quotes and whitespace in general... I haven't found a style any better than your examples, Tursi...

hehe, no need to defend. The only reason I didn't break it out into a separate module is because I intended it as a temporary thing. But that's the brilliant part of releasing libti99 as public domain: I can do as much or as little as I like and not feel obligated* to make it better!

 

(* - that's a lie, I always feel obligated...)

 


Thanks, Matt! So I'll just pass everything instead of trying to access global variables, and everything is registers unless I have local variables, and I can use R0-R9 as I want? Sounds good.

 

Analyzing more of my generated code (.s files): R9, R10, R11, R13, R14, and R15 should be preserved. If they are mutated in your function, restore them before returning. The stack is a good place to do that.

 

These are listed in the gcc patch (I just read the patch code for the first time) as non-volatile registers. So R0 - R8 and R12 are volatile, and can be mutated without any expectations from the caller.
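So, for example, a naked routine that wants to use r9 as scratch could save and restore it like this (just a sketch, using r10 as the stack pointer as described above):

    def  usesR9
usesR9:
    dect r10            ; reserve two bytes on the stack
    mov  r9, *r10       ; r9 is non-volatile, so save the caller's copy
    clr  r9             ; ...now r9 is free to use as scratch
    mov  *r10, r9       ; put the caller's r9 back
    inct r10            ; return with r10 where it started
    rt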

 

-M@


For reference, register usage is explained in this post: http://atariage.com/forums/topic/164295-gcc-for-the-ti/?p=2033241

Also worth looking at the next few posts, which explain the calling convention.

 

Do keep in mind there have been quite a few patches since, so there's a chance it has changed a bit in the meantime.

Edited by TheMole

Just getting back into this again... but I'm running into a case here where code that uses "printf" (my own printf from libti99) is being compiled to use "puts", for which I have no implementation. ;) It seems to be recognizing that there are no additional arguments and trying to simplify it. I'm going to look for a switch (and probably just add puts), but in case anyone else runs into that, I had to look at the assembly to see what was going on. ;)

 

There's an addendum to this - it also changes the strings by removing any trailing carriage returns, since puts normally does that for you.
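(If you do end up just adding puts, a minimal one is enough to satisfy the substitution; something like the sketch below, where cputs/cputc stand in for whatever console-output routines your library actually provides:)

extern void cputs(const char *s);   /* assumed names: swap in whatever console */
extern void cputc(char c);          /* output helpers your library really has  */

int puts(const char *s) {
  cputs(s);        /* print the string...                                */
  cputc('\n');     /* ...plus the trailing newline that puts() implies   */
  return 1;        /* any non-negative value signals success             */
}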

Edited by Tursi

I'm going to look for a switch...

 

You could try the -fno-builtin compiler flag. I've got a note in one of my makefiles that says:

 

# -fno-builtin
# Do not substitute (optimize) other functions for printf, e.g. puts or putchar etc.
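In a makefile that's just a line along these lines (assuming your build already collects flags in a CFLAGS variable):

CFLAGS += -fno-builtin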

  • 2 months later...

Sorry to bother you people again...
I am trying once again to install GCC for TI under Linux (Ubuntu).
I thought export MAKEINFO=missing was the necessary command to get it to work.
Is there an updated page with instructions that work on a standard, not-too-old Linux distribution?
Has anyone tried to follow the instructions on any recent Linux distribution, or on Cygwin?

WARNING: `makeinfo' is missing on your system. You should only need it if
you modified a `.texi' or `.texinfo' file, or any other file
indirectly affecting the aspect of the manual. The spurious
call might also be the consequence of using a buggy `make' (AIX,
DU, IRIX). You might want to install the `Texinfo' package or
the `GNU make' package. Grab either from any GNU archive site.
Makefile:3890: recipe for target 'doc/cpp.info' failed
make[1]: *** [doc/cpp.info] Error 1
make[1]: Leaving directory '/media/sf_Retro/gcc-installer/build/gcc-4.4.0/build/gcc'
Makefile:4858: recipe for target 'all-gcc' failed
make: *** [all-gcc] Error 2
=== Failed to build GCC ===


I am trying once again to install GCC for TI under Linux (Ubuntu). [...] Is there an updated page with instructions that work on a standard, not-too-old Linux distribution?

WARNING: `makeinfo' is missing on your system. [...]
=== Failed to build GCC ===

 

This error is caused by the makeinfo version being too recent for the version of binutils and gcc we are compiling. It works on CentOS 6, but not above.

 

If you have Docker installed on your system, you can always just pull the image I created:

 

docker pull cmcureau/tms9900-gcc


Thanks @cmcureau!
I will try the docker image. How easy is it to use? I guess I need to map a directory into the docker image in order to get it to build binaries in a directory outside the image.

But how can I build GCC for TI on a modern POSIX system like Cygwin or any recent Ubuntu distribution?
I don't need the doc files, as long as I can get the whole thing compiled and installed.
Some time ago I managed it by setting the MAKEINFO variable to "missing". This trick no longer seems to work.
The installation instructions on the first page do not mention the makeinfo issue or a solution for it.


If I try to install from scratch by using the installer as suggested on the first page, I get a file-not-found error:
root@brizio-VirtualBox:/home/brizio/attempt1# ./install.sh /opt/gcc4ti99/
Using these patches:



=== Creating output directory ===
=== Getting Binutils sources ===
--2019-05-19 22:57:16-- http://ftp.gnu.org/gnu/binutils/.tar.bz2
Resolving ftp.gnu.org (ftp.gnu.org)... 209.51.188.20, 2001:470:142:3::b
Connecting to ftp.gnu.org (ftp.gnu.org)|209.51.188.20|:80... connected.
HTTP request sent, awaiting response... 404 Not Found


If instead I try to follow the manual installation instructions, I get these errors:
root@brizio-VirtualBox:~# patch -l < /home/brizio/installer_gcc_ti99/binutils-2.19.1-tms9900-1.7.patch
can't find file to patch at input line 4
Perhaps you should have used the -p or --strip option?
The text leading up to this was:
--------------------------
|diff -rupN binutils-2.19.1/bfd/archures.c binutils-2.19.1-tms9900/bfd/archures.c
|--- binutils-2.19.1/bfd/archures.c 2008-08-09 01:35:12.000000000 -0400
|+++ binutils-2.19.1-tms9900/bfd/archures.c 2014-12-03 01:03:53.180005096 -0500

...
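(The error itself hints at the fix: given the binutils-2.19.1/... prefixes in the diff header, the patch would normally be applied from inside the unpacked binutils-2.19.1 directory with one leading path component stripped, something like:)

cd binutils-2.19.1
patch -p1 -l < /home/brizio/installer_gcc_ti99/binutils-2.19.1-tms9900-1.7.patch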


If I try to use the install.sh script on the Ubuntu Linux subsystem for Windows, I get:
/bin/bash: line 1: /media/sf_Retro/gcc-installer/build/gcc-4.4.0/missing: No such file or directory
Makefile:3890: recipe for target 'doc/cpp.info' failed
make[2]: *** [doc/cpp.info] Error 127
make[2]: Leaving directory '/mnt/c/Retro/gcc-installer/build/gcc-4.4.0/build/gcc'
Makefile:4936: recipe for target 'install-gcc' failed
make[1]: *** [install-gcc] Error 2
make[1]: Leaving directory '/mnt/c/Retro/gcc-installer/build/gcc-4.4.0/build'
Makefile:2474: recipe for target 'install' failed
make: *** [install] Error 2


[sort of SOLVED] Please update the first page with the complete list of steps (dependencies, the MAKEINFO fix, a note that the binaries are created despite the errors, etc.)
Trying the install script again on the Ubuntu Linux subsystem for Windows...
If I export MAKEINFO=missing and then start the script, I get further, but it still fails with:
/bin/bash: line 1: /media/sf_Retro/gcc-installer/build/gcc-4.4.0/missing: No such file or directory
Makefile:3890: recipe for target 'doc/cpp.info' failed
make[2]: *** [doc/cpp.info] Error 127
make[2]: Leaving directory '/mnt/c/Retro/gcc-installer/build/gcc-4.4.0/build/gcc'
Makefile:4936: recipe for target 'install-gcc' failed
make[1]: *** [install-gcc] Error 2
make[1]: Leaving directory '/mnt/c/Retro/gcc-installer/build/gcc-4.4.0/build'
Makefile:2474: recipe for target 'install' failed
make: *** [install] Error 2

Edited by Fabrizio Caruso

Ok, I figured out again that it did work despite the errors.

The correct procedure is (see the sketch after this list):
1. Install libgmp-dev, libmpfr-dev and tree
2. Set export MAKEINFO=missing (to fix one error about texinfo)
3. Run the script
4. Find the resulting binaries inside <target dir>/bin
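Put together, that's roughly the following (Ubuntu package names; the install prefix is just the example used earlier in the thread):

sudo apt-get install libgmp-dev libmpfr-dev tree
export MAKEINFO=missing
./install.sh /opt/gcc4ti99/
ls /opt/gcc4ti99/bin     # the cross tools land here even though the build ends with errors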

 

Could someone please update the procedure with the complete list of steps?
Or maybe fix the script to handle some of them?

Edited by Fabrizio Caruso

This error is caused by the makeinfo version being too recent for the version of binutils and gcc we are compiling. It works on CentOS 6, but not above.

 

If you have Docker installed on your system, you can always just pull the image I created:

 

docker pull cmcureau/tms9900-gcc

Docker always looks like the simple solution, especially when it comes to library versions, but this is something that really concerns me. I am afraid the trend goes towards putting everything in Docker containers, and after some time we end up with dozens of applications, each containerized with its own zoo of library files, countless versions, subversions, and bug fixes.

 

Yes, it's really comfortable, easy to use ... and we'll regret it sooner or later.


@mizapf, yes, a docker image is one way to make life easier, but as things stand the installation instructions require some guesswork.

Having a docker image makes GCC for TI available to the "masses". The current installation process reduces the number of users to a fraction of the people who might want to give it a try.
A docker image is very nice, but installation from source should also be possible for the "masses".

I am asking for a more detailed list of the steps required to install GCC for TI on a recent Linux distribution:
- export MAKEINFO=missing
- the dependencies (at least the package names on the most common distros, such as Ubuntu)
- a note that it is OK if the script fails with libstdc++ errors

Ideally, an improved script would handle the MAKEINFO setting and the final error message itself.


Docker always looks like the simple solution, especially when it comes to library versions, but this is something that really concerns me. I am afraid the trend goes towards putting everything in Docker containers, and after some time we end up with dozens of applications, each containerized with its own zoo of library files, countless versions, subversions, and bug fixes.

 

Yes, it's really comfortable, easy to use ... and we'll regret it sooner or later.

 

I agree that containers are not the right answer for everything... and certainly not even for most things. But it is the quick way to get someone up and running in a repeatable manner, without having to go through the pain of building an older version of gcc. It also lets the end user (read: not an expert who does this kind of thing all the time) share in the excitement when our new game runs for the first time on the hardware we just built.

 

A better solution would be for someone to patch the makeinfo scripts in both gcc and binutils. I haven't gotten around to doing that yet, and with my current workload I don't think I will for a while. I'd love to be able to port these changes to a later version of the toolchain, but that adds even more work, as we would basically be maintaining our own target at that point.

 

If you have your own cross-compiler build, you probably also have your own set of libraries that you use for your tasks too. These are all external to your compiler tree. That's the same concept used for the container -- you simply mount your source volume and compile. The container doesn't have any other libraries in it as it doesn't need them to compile code -- your code provides the linkage.
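In practice that means something along these lines (the exact mount point and the tool name inside the image may differ; adjust to your setup):

docker run --rm -it -v "$(pwd)":/work -w /work cmcureau/tms9900-gcc \
    tms9900-gcc -c hello.c -o hello.o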

 

Just my two cents. :)


As a temporary solution, could someone please update the first page with the info that makes it clear what has to be done when building GCC for TI from scratch?
I mean things like:
1. the dependency on libgmp-dev, libmpfr-dev and tree
2. export MAKEINFO=missing
3. ignore the libstdc++ error and look for the binaries in the target directory

People who bump into this thread will otherwise get stuck or slowed down by the unexpected errors. Many users may think the whole thing is simply broken and unusable.


@cmcureau, I agree with you.
Maybe the current libc/conio could also be part of the docker image, so that people could just use the image, compile a simple "hello world" example, and get going very quickly.
Alternatively, some simple instructions on how to use the docker image with an external library could be provided. This should be simple.

The docker image cannot completely substitute for a documented installation procedure. It does, however, provide an immediately usable environment to get more people on board.

 


Edited by Fabrizio Caruso