
SDCC PIC16 C compiler?


zezba9000


A stack is just as you describe. In my generic C# code example, I tried to explain that C# just makes it look like it's a heap allocation when it's not, if you come from a C++ background.

Notice my code example in my post above with line:

var result = new MyStruct();// the "new" keyword here when used on a struct isn't actually calling an allocator. A struct will always live on the stack when used in a method scope like this.

"new" in C# just states that it's new memory, which can live on the stack (struct) or the heap (class). Structs are always allocated on the stack and classes are always allocated on the heap. It's just a syntax thing, done this way because C# has a much more powerful object initializer than C/C++.

"auto result = MyStruct()" in C is the same as "var result = new MyStruct()" in C#.

 

Ah, another point of confusion on my part, then. In C++, both struct and class mean the same thing, and only change the default visibility for members.

 

So, is it possible to obtain a reference to result above and have that reference persist beyond the life of the call?


 

Do you follow what I'm getting at?

Yes, but I've moved past this idea. I think it's best if we don't use the model I first articulated.

 

Just going to re-post some code here covering the "good" ideas so far (the ones that seem to work).

[MemoryPool(MemoryPoolType.Bank_8bit)]// NOTE: all memory lives in this pool
class MyGraphicsData// if this was a struct you would get a compiler error
{
  public byte r, g, b;
  public int x, y, z;// each int takes two bytes in this pool
}

class NormalObject// if MemoryPool not set, assume main 16 bit pool
{
  public byte r, g, b;// stored as 16 bit words due to padding, but used as 8 bit
  public int x, y, z;// 16 bit int
}

struct SomeStruct
{...}

void Main()
{
  var graphics = new MyGraphicsData();// heap allocated in 8 bit / graphics memory pool
  var normalObject = new NormalObject();// heap allocated in 16 bit memory pool
  var someStruct = new SomeStruct();// stack allocated in 16 bit memory, as all stack objects would be (same as doing "auto someStruct = SomeStruct();" in C++)
}

// Stuff like ROM memory access could be done with an attribute like:
[MemoryPool(MemoryPoolType.ROM, 0xFFFF)]// addressed by memory location?
static class MyReadOnlyData
{
 public readonly static int x, y, z;// only one readonly instance possible (thus static)
}

// BELOW shows how to define pre allocated objects (statics)
unsafe struct MyStruct
{
 public fixed byte myFixedBuff[16];// C style buffer expressed in C#
}

[MemoryPool(MemoryPoolType.Bank_16bit)]
static class MyClass
{
  public static MyStruct myStruct;// would be allocated in the 16bit memory pool (doesn't use GC)
}
So, is it possible to obtain a reference to result above and have that reference persist beyond the life of the call?

 

No. Only in "unsafe" code could you get a raw pointer to this stack object, as you could in C/C++. "result" living on the stack is thought of the same way it would be in C/C++.

 

You can do stuff like this in C# 7.0+ though.

struct MyStruct
{
  private int i;

  ref int Foo()
  {
    return ref i;// returns a ref to the struct's field (similar to "return &i;" in C/C++)
  }
}
Edited by zezba9000

Ah, I think I get what's going on. C# is Hejlsberg's baby, so structs work like Delphi's RECORDs, which are always allocated on the stack and passed by value, unless allocated from a typed record pointer.

Yes, C# structs ALWAYS get put on the stack unless they live in a class / heap object. Classes ALWAYS get put on the heap unless they're static, which makes them compile-time objects only. We can use static class definitions (static class MyStaticClass) to store static structs, pre-allocating global static objects in memory... much as declaring a global variable in C would.

Edited by zezba9000

Yeah, it's the use of new that really tweaked me. That's a confusing choice on the part of C# to (ab)use new that way.

Gotcha. Structs in C# are an odd feature. They are not classes, and don't act like classes, even though you initialize them like classes and they look like classes. *shrug*

 

dZ.


So here is another thing to consider. What memory pool should standard CoreLib objects, such as List<T>, be put in?

[MemoryPool(MemoryPoolType.Bank_8bit)]
struct MyStruct
{...}

void Main()
{
  var myStructArray = new List<MyStruct>();// The List object is allocated in "scratch" memory BUT its backing ptr points to memory stored in 8 bit memory
}

The default heap memory pool could be set at compile time. So for LTO Flash, you could set it to use the memory on the cart instead.

Also, are there any other systems like the Intellivision that split up memory like this?


So here is another thing to consider. What memory pool should standard CoreLib objects, such as List<T>, be put in?

 

I guess it really depends on what your primary target is: Unexpanded Intellivisions with pure ROM cartridges, or Intellivisions that incorporate some amount of additional RAM to enable expanded programming models?

 

Both JLP and LTO Flash provide the option of a significant amount of 16-bit RAM. JLP provides 8000 words of 16-bit RAM at $8040 - $9F7F, and so makes that RAM pool an obvious target for allocations. It also locks you into that board design, or board designs with comparable expanded memory. LTO Flash, of course, offers a much larger amount of potential RAM, although now you get into bank switching to access all of it, and that presents additional challenges in the programming model.

 

If you're targeting unexpanded Intellivisions, does it make sense to even use something like List<MyStruct>? You have 238 bytes of 8-bit memory to work with, and 112 words of 16-bit memory to work with. (Locations $100 - $101 are reserved by the EXEC for the interrupt vector address, so you only get to use 238 out of the 240 bytes there.) You need at least 8 words of hardware stack in System RAM to take an interrupt, and a typical hardware stack depth is somewhere around 16 to 32 words, so I wouldn't bank on more than 80 - 90 words of 16-bit RAM either.

Totally, you would never use a GC object if targeting "unexpanded Intellivisions", as you put it. I was thinking in terms of extra memory like the LTO Flash gives me, which has plenty of RAM for a List<T> object.


So I was talking with my brother and he brought up something. Not sure if this could be used in SDCC or not, but...

If you were to change how "sizeof" works and maybe add another one called "sizeofBits" you could handle very odd hardware.

 

So if you were to do this in C, it might look like:

int size = sizeof(char, MEMORY_POOL_8Bit);// would return 1
int size = sizeof(char, MEMORY_POOL_16Bit);// would return 1
int size = sizeof(int, MEMORY_POOL_8Bit);// would return 2
int size = sizeof(int, MEMORY_POOL_16Bit);// would return 1

int size = sizeofBits(char, MEMORY_POOL_8Bit);// would return 8
int size = sizeofBits(char, MEMORY_POOL_16Bit);// would return 16 (maybe 8?)
int size = sizeofBits(int, MEMORY_POOL_8Bit);// would return 16
int size = sizeofBits(int, MEMORY_POOL_16Bit);// would return 16

In C#, it would look like this (very easy to add this feature to a custom .NET CoreLib):

int size = Marshal.SizeOf<byte>(MemoryPoolType.Bank_8Bit);// would return 1
int size = Marshal.SizeOf<byte>(MemoryPoolType.Bank_16Bit);// would return 1
int size = Marshal.SizeOf<int>(MemoryPoolType.Bank_8Bit);// would return 2
int size = Marshal.SizeOf<int>(MemoryPoolType.Bank_16Bit);// would return 1

int size = Marshal.SizeOfBit<byte>(MemoryPoolType.Bank_8Bit);// would return 8
int size = Marshal.SizeOfBit<byte>(MemoryPoolType.Bank_16Bit);// would return 16 (maybe 8?)
int size = Marshal.SizeOfBit<int>(MemoryPoolType.Bank_8Bit);// would return 16
int size = Marshal.SizeOfBit<int>(MemoryPoolType.Bank_16Bit);// would return 16

Maybe this was answered, BUT if I were to use one 16-bit int as two bytes, can the CP1610 instructions process the first and second halves of a 16-bit int as if they were two separate bytes?

If so, you could save a lot of memory, BUT there is no way for pointer arithmetic to work correctly here, and I'm sure C frameworks would break (as you have stated, @River Patroller). In C#, however, if you disabled the use of pointers on objects that pack two bytes into one int, you wouldn't have an issue. C# has a struct attribute that lets you define how memory is packed; we could extend it in a custom CoreLib in case we needed to force packing for pointer arithmetic.

Edited by zezba9000

Maybe this was answered, BUT if I were to use one 16-bit int as two bytes, can the CP1610 instructions process the first and second halves of a 16-bit int as if they were two separate bytes?

 

Not transparently. Accessing ROM like that is sort of what we do already, but if you are thinking of allocating two 8-bit variables in a single 16-bit word of RAM, then you'll be in trouble. You would have to pack and unpack the values at runtime, extending the sign as necessary.
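
To make the cost concrete, here is roughly what that pack/unpack step looks like in C. A sketch only (plain portable C, not CP-1610 output; the helper names are made up):

#include <stdint.h>

// Pack two 8-bit values into one 16-bit word.
static uint16_t pack2(uint8_t lo, uint8_t hi)
{
  return (uint16_t)(((uint16_t)hi << 8) | lo);
}

// Unpack the low byte as a signed value. With no sign-extension
// instruction on the CPU, the (b ^ 0x80) - 0x80 dance has to happen
// in software on every signed 8-bit load.
static int16_t unpack_lo_signed(uint16_t w)
{
  int b = w & 0xFF;                    // 0..255
  return (int16_t)((b ^ 0x80) - 0x80); // manual sign extension
}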

 

Packing and unpacking and address computation is a bit expensive, especially if you need to do it on every memory access. And let me tell you a dirty little secret of the CP-1610: there is no OR instruction, no MULT, and no built-in sign-extension. We must implement all those in software. Isn't the Intellivision great! :grin:

 

If you're talking about treating the two halves of a 16-bit register as individual bytes, like the x86 does with AH/AL, then no, it is still a single 16-bit word. There is a SWAP instruction that swaps both halves in place, but again, the result is still a 16-bit value as far as the CP-1610 knows, and the status flags reflect the whole word.

 

-dZ.

 

 

P.S. "River Patroller" is like an AtariAge title badge based on number of postings. The user's name is right above that on the darker bar at the top of a post. You are responding to intvnut and I am DZ-Jay. Don't worry, we've all done that. ;)


"River Patroller" (aka. intvnut) here... Yeah, this is why I keep guiding discussions on a a C compiler model back toward CHAR_BIT = 16 and sizeof(char) = sizeof(int) = sizeof(void *) = 1. The Intellivision's CPU is a word-oriented CPU, and every word-oriented CPU I've encountered generally goes that direction. The C model breaks down quickly if you stray from it. The 8-bit memory in the Intellivision is an interesting anomaly that really doesn't fit C's model of the world. (Neither does 10-bit ROM.)

 

You can do various things to get to the bytes within a word, as DZ-Jay mentions. It's not cheap, and you need some additional bookkeeping (either in the pointer, or elsewhere) to know which half of the word you're looking at. If a random piece of code needs to examine an arbitrary pointer to discover which half of the word it's looking at, that becomes expensive.
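
For illustration, one hypothetical way to carry that bookkeeping is a "byte pointer" that keeps the word address in the upper bits and the half-select in bit 0; every dereference then pays for a shift, a test, and an extract. A sketch (the encoding is invented for this example, not taken from any existing tool):

typedef unsigned int byteptr_t; // word address in bits 15:1, half-select in bit 0

unsigned char byte_load(const unsigned int *mem, byteptr_t bp)
{
  unsigned int word = mem[bp >> 1];     // fetch the containing 16-bit word
  return (bp & 1) ? (word >> 8) & 0xFF  // odd: upper half
                  : word & 0xFF;        // even: lower half
}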

 

For ROM, we commonly pack 8-bit data into 16-bit words, but then use dedicated code to unpack it. Strings and GRAM data are the two most common cases. For GRAM data, since GRAM is usually treated as write-only and is written in blocks of 8 bytes, it suffices to have a dedicated copy loop in assembly. For strings, there's a little more work to get the data on the screen, and sometimes the characters are preprocessed a bit to make the unpacking process a little friendlier. (Rotate left by 3 bits; shift ASCII values down by 0x20.) And, again, there's usually a dedicated unpacking loop in assembly involved to make it efficient.
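
As a sketch of what such a dedicated unpacking loop does, in C (the low-byte-first order and the putch callback are assumptions for illustration):

void print_packed(const unsigned int *rom, unsigned int nwords, void (*putch)(char))
{
  for (unsigned int i = 0; i < nwords; i++)
  {
    putch((char)(rom[i] & 0xFF));        // low character of the word
    putch((char)((rom[i] >> 8) & 0xFF)); // high character of the word
  }
}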

 

The CP-1610 executes instructions at about 1/3rd the rate of a similarly clocked 6502. The Intellivision has a roughly 0.1 MIPS CPU, and being a product of the mid-1970s, it's missing many affordances.


Yeah, this is why I keep guiding discussions on a C compiler model back toward CHAR_BIT = 16 and sizeof(char) = sizeof(int) = sizeof(void *) = 1.

 

Ok, but when we store a C-style char in 8-bit memory, would we still consider it a WORD even though only 8 bits are usable? C#, for example, leaves it up to the runtime / platform implementation what the size of a byte or char is, but that aside, conceptually does having something like sizeof(char, MEMORY_POOL_8BIT) help when doing pointer arithmetic? If I were to store an int in 8-bit memory, wouldn't that actually be stored as two 16-bit-wide WORDs, totalling 32 bits with only 16 bits usable? Thus sizeof(int, MEMORY_POOL_8BIT) = 2.

 

Packing and unpacking and address computation is a bit expensive, especially if you need to do it on every memory access.

 

Ok cool, so it sounds like packing is bad by default.

 

P.S. "River Patroller" is like an AtariAge title badge based on number of postings.

 

Oops

Edited by zezba9000

From the perspective of the CPU, 8-bit RAM is just 16-bit RAM like anything else. The bus and the hardware conspire to zero out the upper bits, but pass it along like any other 16-bit value. In fact, storing a 16-bit value into 8-bit RAM will do just that: push the full 16 bits, of which only the lower byte makes it to the actual storage. This is one of the tricks we exploit for packing and unpacking.

 

The compiler needs to treat the CPU as a word-addressable machine, where the word is 16-bits wide. The problem is ... what do we do with all that 8-bit stuff we have lying around, like Graphics RAM and Scratch RAM and other device registers? Traditionally at the lower levels of Assembly Language (and in the somewhat primitive IntyBASIC), we treat them as a special case.

 

It is starting to look like we would have to make extreme adjustments either way: either a bespoke compiler, or a severely constrained programming model.

 

dZ.



So it sounds like you're saying a C-style sizeof(int) in both 8-bit and 16-bit RAM would return 1, correct? I'm just trying to confirm this 100%, as this is very easy to do in a C# compiler without breaking the lang, even if it would be too hard to do in a C one, as @intvnut keeps pointing out.

 

When you say you treat the 8-bit memory as a special case in IntyBASIC, what do you mean by that in terms of how you treat it? Does IntyBASIC let you store a 16-bit int in 8-bit memory, for example?

How does IntyBASIC handle the 10-bit keyboard memory? Anything special, or does it just treat it like 16-bit RAM when doing arithmetic?

 

Also, I could always make my IL2X translator just convert to IntyBASIC, but it sounds like IntyBASIC uses frameworks to access stuff like the "scratch" / LTO Flash memory pools. How is this done?

So I guess I'm asking: how does IntyBASIC handle creating an int16/int8 in one memory pool vs. the other?

 

Can you also clarify that I'm reading this correctly? The quote below states there are three 8-bit memory pools... but do any of these memory locations, Scratch or ECS, have 16-bit sections? @intvnut says graphics is only 8-bit, but what about the others?

@intvnut says "scratch" memory is used a lot because it has a lot of space BUT is all that space only considered 8 bit memory OR is a large part of it used as 16 bit memory?

There are up to 3 8-bit memory pools.

  • Scratch RAM at $100 - $1EF. This is intended for game variables and is accessible at all times. 240 total bytes.
  • ECS RAM at $4000 - $47FF. This is only available if the ECS is attached. This is where ECS BASIC stored your BASIC programs and variables.
  • Graphics RAM at $3800 - $39FF. This is more like GPU memory and is primarily owned by the display controller chip (STIC). Games mostly write to this memory to update graphic tile pictures. Reading isn't common outside of certain types of initialization steps.

 

 

Trying to weigh the options here. Sorry for all the questions guys.

Edited by zezba9000

When you say you treat the 8-bit memory as a special case in IntyBASIC, what do you mean by that in terms of how you treat it? Does IntyBASIC let you store a 16-bit int in 8-bit memory, for example?

 

IntyBASIC lets you declare two kinds of variables: unsigned variables in Scratch RAM (8-bit), and unsigned variables in System RAM (16-bit).

 

When I say "special case" it's because all operations are 16-bits, but then get copied to storage. If the programmer opted for an 8-bit variable, it would be truncated.

 

This puts the burden of understanding the selected storage on the programmer. There is no abstraction, and no automatic or implicit conversion other than:

* 8-bit to 16-bit: the variable has upper bits zeroed.

* 16-bit to 8-bit: the upper bits get truncated.

 

No sign extension unless a special directive is used to coerce it.
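
In C terms, the two implicit conversions above amount to nothing more than this (a sketch; IntyBASIC itself exposes no such functions):

unsigned int widen(unsigned char v8)    // 8-bit to 16-bit
{
  return v8;                            // upper bits zeroed
}

unsigned char narrow(unsigned int v16)  // 16-bit to 8-bit
{
  return (unsigned char)(v16 & 0xFF);   // upper bits truncated
}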

 

How does IntyBASIC handle the 10-bit keyboard memory? Anything special, or does it just treat it like 16-bit RAM when doing arithmetic?

 

Nothing special; it just reads it as 16 bits, the same as when accessing Scratch RAM: it's moved into a register as a 16-bit value with zeroed upper bits.

 

Treating 8-bit Scratch as part of the 16-bit pool would work fantastically, if it weren't for the fact that it would mislead the programmer into thinking he was dealing with a flat memory model; he would then lose bits as they get truncated.

 

That's why the primitive memory model we've used on the Intellivision so far is to deal with 16 bits all the time, and have an extra pool of 8-bit RAM that we treat as a special case, knowing what it is and how it behaves.

 

Both intvnut and I suggest that this separation be preserved, if possible.

 

Also, I could always make my IL2X translator just convert to IntyBASIC, but it sounds like IntyBASIC uses frameworks to access stuff like the "scratch" / LTO Flash memory pools. How is this done?

 

I think you are better off translating to Assembly Language -- or better yet, direct object code if you could.

 

So I guess I'm asking: how does IntyBASIC handle creating an int16/int8 in one memory pool vs. the other?

 

Via syntactic sugar:

 

foo = $FF        ' 8-bits
#foo = $FF      ' 16-bits
foo = #foo * 2      ' Will truncate to 8-bits when stored
That's it. The compiler tracks each variable separately in its own pool, and the programmer deals with them as two discrete types. No magic.

 

Can you also clarify that I'm reading this correctly? The quote below states there are three 8-bit memory pools... but do any of these memory locations, Scratch or ECS, have 16-bit sections? @intvnut says graphics is only 8-bit, but what about the others?

@intvnut says "scratch" memory is used a lot because it has a lot of space BUT is all that space only considered 8 bit memory OR is a large part of it used as 16 bit memory?

 

I don't understand your question.

 

When we say 8-bit vs. 16-bit memory, we mean the physical storage. Obviously you could split a 16-bit value into two 8-bit parts and store them in 8-bit storage, but that is not what we mean. We mean that there is no abstraction that performs that split automatically.

 

In the Intellivision jargon "Scratch" memory is 8-bit storage, and "System" memory is 16-bit storage. This is traditionally because the Intellivision designers intended the 16-bit memory to be used by the internal framework and the CPU stack, and the 8-bit memory left mostly free for the programmer to use as a "scratchpad."

 

All that said, in practice, given that 8-bit memory is more abundant (and that a chunk of the available 16-bit RAM is reserved for the CPU stack), we tend to store anything there, including 16-bit pointers.

 

However, splitting and splicing the values back and forth on each storage access adds a slight overhead.

 

What we tend to do (in our current primitive tools) is to flatten all 8-bit memory segments into one large pool, and all 16-bit memory segments into another; then treat them separately.

 

Then ROM (for data tables and program code branches) is treated, of course, as a flattened pool of 16-bit storage; even though it is also splintered across multiple segments.

 

Finally, 8-bit Graphics RAM (GRAM) is treated as a special write-only block of "cards," so it's abstracted with routines to write to it directly from 16-bit ROM.

 

Does that answer your question in any way?

 

dZ.

 

 



So it sounds like you're saying a C-style sizeof(int) in both 8-bit and 16-bit RAM would return 1, correct? I'm just trying to confirm this 100%, as this is very easy to do in a C# compiler without breaking the lang, even if it would be too hard to do in a C one, as @intvnut keeps pointing out.

 

The approach I was planning to take in C would be best summed up as "push it back to the programmer." Provide some compiler intrinsics for accessing 8-bit and 16-bit values stored in 8-bit memory, but otherwise don't try to make direct use of it from the compiler. The programmer has to explicitly allocate storage over there and use it via dedicated primitives. That is, some intrinsics such as:


unsigned int get_u8(void *p);             // Just a normal load.
unsigned int get_u16(void *p);            // Reads two consecutive locations.
int get_s8(void *p);                      // Performs sign extension to 16 bits.
int get_s16(void *p);                     // Reads two consecutive locations.
void put_u8(void *p, unsigned int val);   // Just a normal store.
void put_u16(void *p, unsigned int val);  // Writes two consecutive locations.
void put_s8(void *p, int val);            // Just a normal store.
void put_s16(void *p, int val);           // Writes two consecutive locations.


Yes, some of them seem redundant, but I figure a nice orthogonal set of primitives would at least keep the type system consistent even if some just map to (*p = val) or (val = *p).
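
Usage might then look something like this (a hypothetical sketch; the address and the variable are made up):

#define HEALTH ((void *)0x0102)  // hypothetical slot in 8-bit Scratch RAM

void take_damage(int amount)
{
  int h = get_s8(HEALTH);        // load, sign-extended to 16 bits
  put_s8(HEALTH, h - amount);    // plain store; hardware drops bits 15:8
}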

 

The C compiler itself would only trust 16-bit RAM for its variables, spill values, etc. Since we do have carts with additional 16-bit RAM, it just seemed easier to start with that model rather than trying to do something more complex. Once we had real world experience with it, then it might make sense to introduce pointer attributes and other things to get fancy.

 

By explicitly not incorporating 8-bit memory into C's model, I don't need to invent new semantics for sizeof(). The programmer has to explicitly acknowledge the packing/unpacking step by calling the appropriate intrinsic.

 

 

When you say you treat the 8-bit memory as a special case in IntyBASIC, what do you mean by that in terms of how you treat it? Does IntyBASIC let you store a 16-bit int in 8-bit memory, for example?

 

IntyBASIC has a static allocation model for all variables, with 8 bit variables in 8 bit memory and 16 bit variables in 16 bit memory. That's it. It doesn't really have a concept of a pointer. It does let you get the address of a variable, and with that you can PEEK and POKE. But, that's not really the same thing, since the address is typeless. (e.g. there isn't a char* vs. int* in the language, and with PEEK/POKE you "get what you get.")
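
In C terms, PEEK and POKE amount to typeless word access, something like (a sketch):

#define PEEK(addr)      (*(volatile unsigned int *)(addr))         // read a word
#define POKE(addr, val) (*(volatile unsigned int *)(addr) = (val)) // write a word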

 

 

How does IntyBASIC handle the 10-bit keyboard memory? Anything special, or does it just treat it like 16-bit RAM when doing arithmetic?

 

It doesn't make any provision for the Keyboard Component. I think those are rare enough you can ignore them. The ECS, though, is much more common, and it has ordinary 8-bit RAM (2K bytes).

 

 

Also, I could always make my IL2X translator just convert to IntyBASIC, but it sounds like IntyBASIC uses frameworks to access stuff like the "scratch" / LTO Flash memory pools. How is this done?

So I guess I'm asking: how does IntyBASIC handle creating an int16/int8 in one memory pool vs. the other?

 

IntyBASIC statically allocates all variables at compile time. There aren't any local variables in IntyBASIC, and it does not really support recursion. (You could try; it won't stop you. But, you have to be very careful.) So, if you declare the variable I to hold a loop counter, that variable I gets bound to an 8-bit address. Wherever you see the variable I in the program, IntyBASIC accesses that 8 bit location. So, you need to be a bit careful GOSUBing from a FOR loop if you're in the habit of using I for your iteration variable.

 

It's a very simple model with a flat namespace for all variables, and static allocation at compile time. Imagine if this were C code with no pointers, and all variables were global.
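
A C analog of that GOSUB-from-a-FOR-loop hazard (a sketch: every variable global and statically allocated):

unsigned char i; // one statically allocated 8-bit cell, shared by all routines

void helper(void)
{
  for (i = 0; i < 3; i++) { /* ... */ } // clobbers the caller's counter
}

void outer(void)
{
  for (i = 0; i < 10; i++)
    helper(); // on return i == 3, so the outer loop never terminates
}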

 

 

Can you also clarify that I'm reading this correctly? The quote below states there are three 8-bit memory pools... but do any of these memory locations, Scratch or ECS, have 16-bit sections? @intvnut says graphics is only 8-bit, but what about the others?

@intvnut says "scratch" memory is used a lot because it has a lot of space BUT is all that space only considered 8 bit memory OR is a large part of it used as 16 bit memory?

 

The 8 bit pools are 8 bit pools. The only 16-bit RAM in the system is the RA-3-9600 System RAM, at $0200 - $035F.

 

Summarizing:

  • $0100 - $01EF is 8-bit memory for programs.
  • $0200 - $035F is 16-bit memory; $0200 - $02EF is BACKTAB (the character buffer), and $02F0 - $035F is for programs.
  • $3800 - $39FF is 8-bit memory for graphics RAM.
  • $4000 - $47FF is 8-bit memory in the ECS, and is available only if the ECS is attached.
Edited by intvnut

Can you also clarify that I'm reading this correctly? The quote below states there are three 8-bit memory pools... but do any of these memory locations, Scratch or ECS, have 16-bit sections? @intvnut says graphics is only 8-bit, but what about the others?

@intvnut says "scratch" memory is used a lot because it has a lot of space BUT is all that space only considered 8 bit memory OR is a large part of it used as 16 bit memory?

 

Re-reading this question, I think I understand what's being confused here.

 

You can store 16 bit values in 8 bit memory, but you have to use a multi-instruction sequence to do so. IntyBASIC never stores 16 bit values in 8 bit memory on its own. Assembly language programmers store 16-bit values in 8-bit memory with explicit instruction sequences such as:


;  Store 16 bit value to 8 bit memory
   MVO   R0, $102  ; store lower half (hardware ignores bits 15:8)
   SWAP  R0        ; swap halves in the register, so upper half is in 7:0
   MVO   R0, $103  ; store upper half (hardware ignores bits 15:8)


; Load 16 bit value from 8-bit memory
    MVI  $103, R0  ; get upper half into bits 7:0
    SWAP R0        ; put upper half into bits 15:8
    XOR  $102, R0  ; get lower half into bits 7:0

; Or, if the address is in an auto-increment register:
    SDBD
    MVI@ R4,   R0  ; CPU reads from two consecutive locations and does the work



IntyBASIC lets you declare two kinds of variables: unsigned variables in Scratch RAM (8-bit), and unsigned variables in System RAM (16-bit).

 

So no way to do a signed int in IntyBASIC?

 

Does that answer your question in any way?

 

I think so, tnx.

 

The approach I was planning to take in C would be best summed up as "push it back to the programmer." Provide some compiler intrinsics for accessing 8-bit and 16-bit values stored in memory, but otherwise don't try to make direct use of it from the compiler. The programmer has to explicitly allocate storage over there and use it via dedicated primitives. That is, some intrinsics such as:

 

I think for the C lang, this idea sounds pretty good. Although, how do you plan to do static allocation in different memory pools, with static global structs for example? Add a C-style attribute keyword/token to the compiler?

Also do you think this approach might work with the SDCC compiler?

 

The 8 bit pools are 8 bit pools. The only 16-bit RAM in the system is the RA-3-9600 System RAM, at $0200 - $035F.

 

So is it safe to call the extended 16 bit memory the LTO Flash gives you "System" memory?

 

You can store 16 bit values in 8 bit memory, but you have to use a multi-instruction sequence to do so. IntyBASIC never stores 16 bit values in 8 bit memory on its own. Assembly language programmers store 16-bit values in 8-bit memory with explicit instruction sequences such as:

 

This example perfectly illustrates why a C style lang is desirable.

Edited by zezba9000

Just to clarify: in IntyBASIC, 8-bit variables are unsigned, and 16-bit variables are signed, by default. This is because 8-bit variables are read into 16-bit registers without sign extension, and the 16-bit registers already perform arithmetic over signed values.

 

You can coerce them to the opposite by using the SIGNED and UNSIGNED directives during operations.

 

It's rather primitive and bolted on.

 

The best model is what intvnut suggested with multiple primitive types supported by the compiler to distinguish between 8-bit and 16-bit variables, and each with separate signed or unsigned flavors.

 

dZ.

 

 



So no way to do a signed int in IntyBASIC?

 

Actually, IntyBASIC supports both signed and unsigned 8-bit and 16-bit variables. For signed 8-bit variables it does perform sign extension.


  SIGNED name[,name]
  
     Indicates the names are signed variables/arrays.
     
     This only affects to the 8-bits variables that are unsigned by
     default.
     
     Note this adds two instructions to extend sign to each 8-bit signed
     variable read, although IntyBASIC will try to optimize them out.
     
     Usually you can develop your programs without using this keyword,
     but it's available if you require it.

  UNSIGNED name[,name]
  
     Indicates the names are unsigned variables/arrays.
     
     This only affects to the 16-bits variables when doing comparisons
     of less or greater than, including FOR statements.

     Note this can add one extra instruction to each comparison, this
     depends on comparison direction.

     Very useful for score routines up to 65535 or to create subroutines
     for values of 32 bits.


 

So is it safe to call the extended 16 bit memory the LTO Flash gives you "System" memory?

 

The name for the System RAM comes from the name of the RAM chip itself (RA-3-9600 System RAM).

[attached image: the RA-3-9600 System RAM chip]

 

I've been calling the RAM at $8040 - $9F7F "JLP RAM", since that's the address range my JLP boards provide RAM at. LTO Flash is more flexible than that, but can also emulate JLP.

 

As for Scratch (or Scratchpad), I believe I picked up that name from the service manual.

 

[attached image: Intellivision service manual excerpt]


I think for the C lang, this idea sounds pretty good. Although, how do you plan to do static allocation in different memory pools, with static global structs for example? Add a C-style attribute keyword/token to the compiler?

 

BTW, I had noticed a typo in my original comment. It meant to say "8-bit memory." For 16-bit memory, normal variable handling applies.

 

At $DAYJOB, I work on a specialized device that has a large pool of memory that is not directly accessible from the CPU. (Well, partly true: Direct writes work, while reads require a DMA operation.) We statically allocate almost everything in that memory at compile time, by declaring a very large struct. We then have a small set of primitives that access variables and structures in that memory by applying offsetof() on that struct. It's worked out quite well.

 

I could see using a similar model in C on the Intellivision for anything I needed to statically allocate. And for games with multiple "phases" that don't overlap in time, I could see using a union of structs to model that.
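
A rough sketch of that pattern (the names and base address are made up; on a word machine with sizeof(int) == 1, offsetof() yields word offsets):

#include <stddef.h> // offsetof

struct title_phase { unsigned int menu_sel; };
struct game_phase  { unsigned int player_x, player_y, score; };

struct ext_mem
{
  unsigned int hi_score; // lives for the whole session
  union                  // phases never overlap in time,
  {                      // so they can share the same storage
    struct title_phase title;
    struct game_phase  game;
  } phase;
};

#define EXT_BASE 0x8040u // e.g. the start of JLP RAM
#define EXT_ADDR(member) (EXT_BASE + offsetof(struct ext_mem, member))

// A put_u16-style primitive would then take the computed address, e.g.:
//   put_u16((void *)EXT_ADDR(phase.game.score), 100);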

 

 

Also do you think this approach might work with the SDCC compiler?

 

I don't think so. It understands the sizes of types in terms of the number of 8-bit locations they occupy, and the "1 location == 8 bits" assumption seems to run deep.

 

Now, if you forced SDCC to only use 8-bit memory, including for 16-bit values, then maybe you're onto something. Even the pointer scaling would be correct.

Edited by intvnut

So I was looking at some defines in GCC and noticed "BITS_PER_UNIT", which GCC uses directly for char's bit length.

Ref: http://www.delorie.com/gnu/docs/gcc/gccint_112.html

 

So I looked to see whether this value is used in SDCC, and it looks like it is, at least here: https://github.com/darconeous/sdcc/blob/master/support/cpp/output.h#L263

 

If you changed "BITS_PER_UNIT" in GCC or SDCC to 16, might this solve the problem?

Edited by zezba9000

Also, it looks like this person was able to re-target GCC for the DSP1600 CPU: http://www.drdobbs.com/retargeting-the-gnu-c-compiler/184401529

 

 

  • BITS_PER_UNIT — On an 8-bit byte machine, such as the x86 family, this would be defined as 8. For a word machine, such as the DSP1600 port I completed, this would be the size of a word on the target machine. In the case of the DSP1600 family, this has the value 16. As you can see, GCC is not locked into viewing a processor as an 8-bit byte machine as is commonly assumed by many compilers I have encountered.
  • BITS_PER_WORD — This macro is the number of bits in a word. For the x86 family, this would equal the value 32. Note that by using the formula

 
