Everything posted by zezba9000

  1. I asked about it on GitHub. Fingers crossed someone has a good answer: https://github.com/drh/lcc/issues/39 From reading other material online it sounds like it should be possible, but maybe someone who has used LCC before has a better answer.
  2. For LCC, if you go here: https://sites.google.com/site/lccretargetablecompiler/lccmanpage then click on "Docs->Code Generate Interface", it will pull up this PDF: http://drhanson.s3.amazonaws.com/storage/documents/interface4.pdf Here are more directions from someone who made LCC target a 16-bit CPU using that PDF: http://www.fpgacpu.org/usenet/lcc.html Anyway, just some interesting options.
  3. I mean, you're targeting a very old CPU; using an older compiler like LCC isn't that bad. The online resources not being officially documented is a little silly, although you might be able to find a PDF version of the books on the Wayback Machine website. Also, what about just writing an LLVM backend? http://llvm.org/docs/WritingAnLLVMBackend.html#introduction Then you could just use Clang. LLVM seems to have lots of docs.
  4. Here is an example of someone using LCC to target the PDP-11, a 16-bit CPU: http://telegraphics.com.au/sw/info/lcc-pdp11.html
  5. From the StackOverflow post, LCC may be the C compiler to use if it's much easier than GCC for this type of thing: https://stackoverflow.com/questions/7484466/choosing-cpu-architecture-for-llvm-clang
  6. Also, another option is the LCC C compiler: https://en.wikipedia.org/wiki/LCC_(compiler) It's designed to solve the kind of issues we're looking at. Someone used it here, for example, to target a 16-bit CPU: https://stackoverflow.com/questions/7484466/choosing-cpu-architecture-for-llvm-clang Here is the git repo: https://github.com/drh/lcc
  7. Also, it looks like this person was able to retarget GCC for the DSP1600 CPU: http://www.drdobbs.com/retargeting-the-gnu-c-compiler/184401529
  8. So I was looking at some defines in GCC and noticed "BITS_PER_UNIT", which GCC uses directly for char's bit length. Ref: http://www.delorie.com/gnu/docs/gcc/gccint_112.html So I looked to see whether this value is used in SDCC, and it looks like, at least here, it is: https://github.com/darconeous/sdcc/blob/master/support/cpp/output.h#L263 If you changed "BITS_PER_UNIT" in GCC or SDCC to 16, might this solve the problem?
  9. So there's no way to do a signed int in IntyBASIC? I think so, thanks. I think for the C lang this idea sounds pretty good, although how do you plan to do static allocation in different memory pools, with static global structs for example? Add a C-style attribute keyword/token to the compiler? Also, do you think this approach might work with the SDCC compiler? So is it safe to call the extended 16-bit memory the LTO Flash gives you "System" memory? This example perfectly illustrates why a C-style lang is desirable.
  10. So it sounds like you're saying a C-style sizeof(int) in both 8-bit and 16-bit RAM would return 1, correct? I'm just trying to confirm this 100%, as this is very easy to do in a C# compiler without breaking the lang, even if it's too hard to do in a C one, as @intvnut keeps pointing out. When you say you treat the 8-bit memory as a special case in IntyBASIC, what do you mean in terms of how you're treating it? Does IntyBASIC let you store a 16-bit int in 8-bit memory, for example? How does IntyBASIC handle the 10-bit keyboard memory: anything special, or does it just treat it like 16-bit RAM when doing arithmetic? Also, I could always make my IL2X translator just convert to IntyBASIC, but it sounds like IntyBASIC uses frameworks to access things like the "scratch" / LTO Flash memory pool, or how is this done? So I guess I'm asking: how does IntyBASIC handle creating an int16/int8 in one memory pool vs the other? Can you also clarify that I'm reading this correctly? The quote below states there are three 8-bit memory pools... however, do any of these memory location types, scratch or ECS, have 16-bit sections? @intvnut says graphics is only 8-bit, but what about the others? @intvnut says "scratch" memory is used a lot because it has a lot of space, BUT is all that space considered 8-bit memory only, OR is a large part of it used as 16-bit memory? Trying to weigh the options here. Sorry for all the questions, guys.
  11. Ok, but when we store a C-style char in 8-bit memory, would we still consider it a WORD even though only 8 bits are usable? C#, for example, leaves it up to the runtime / platform implementation what the size of a byte or char is, but that aside, conceptually does having something like sizeof(char, MEMORY_POOL_8BIT) help when doing pointer arithmetic? If I were to store an int in 8-bit memory, wouldn't that actually be stored as two 16-bit-wide WORDs, totaling 32 bits with only 16 bits usable? Thus sizeof(int, MEMORY_POOL_8BIT) = 2. Ok, cool, so it sounds like packing is bad by default. Oops.
  12. So I was talking with my brother and he brought up something. Not sure if this could be used in SDCC or not, but... if you were to change how "sizeof" works, and maybe add another operator called "sizeofBits", you could handle very odd hardware. If you were to do this in C, it might look like:

    int size = sizeof(char, MEMORY_POOL_8Bit);      // would return 1
    int size = sizeof(char, MEMORY_POOL_16Bit);     // would return 1
    int size = sizeof(int, MEMORY_POOL_8Bit);       // would return 2
    int size = sizeof(int, MEMORY_POOL_16Bit);      // would return 1
    int size = sizeofBits(char, MEMORY_POOL_8Bit);  // would return 8
    int size = sizeofBits(char, MEMORY_POOL_16Bit); // would return 16 (maybe 8?)
    int size = sizeofBits(int, MEMORY_POOL_8Bit);   // would return 16
    int size = sizeofBits(int, MEMORY_POOL_16Bit);  // would return 16

In C#, it would look like this (very easy to add this feature into a custom .NET CoreLib):

    int size = Marshal.SizeOf<byte>(MemoryPoolType.Bank_8Bit);     // would return 1
    int size = Marshal.SizeOf<byte>(MemoryPoolType.Bank_16Bit);    // would return 1
    int size = Marshal.SizeOf<int>(MemoryPoolType.Bank_8Bit);      // would return 2
    int size = Marshal.SizeOf<int>(MemoryPoolType.Bank_16Bit);     // would return 1
    int size = Marshal.SizeOfBit<byte>(MemoryPoolType.Bank_8Bit);  // would return 8
    int size = Marshal.SizeOfBit<byte>(MemoryPoolType.Bank_16Bit); // would return 16 (maybe 8?)
    int size = Marshal.SizeOfBit<int>(MemoryPoolType.Bank_8Bit);   // would return 16
    int size = Marshal.SizeOfBit<int>(MemoryPoolType.Bank_16Bit);  // would return 16

Maybe this was answered, BUT if I were to use one 16-bit int as two bytes, can the CP1610 instructions support processing the first and second halves of a 16-bit int as if they were two separate bytes? If so you could save a lot of memory, BUT there is no way for pointer arithmetic to work correctly here, and I'm sure C frameworks would break (as you have stated, @River Patroller).

In C#, however, if you disallowed the use of pointers on objects that packed two bytes into one int, you wouldn't have an issue here. C# has a struct attribute that lets you define how memory is packed, in case you needed to force packing for pointer arithmetic, as we could extend it in a custom CoreLib.
  13. Totally, you would never use a GC object if targeting "unexpanded Intellivisions", as you put it. I was thinking in terms of extra memory like the LTO Flash gives me, which has plenty of RAM for a List<T> object.
  14. So here is another thing to consider. What memory pool should standard CoreLib objects, such as List for example, be put in?

    [MemoryPool(MemoryPoolType.Bank_8bit)]
    struct MyStruct {...}

    void Main()
    {
        var myStructArray = new List<MyStruct>();// the List object is allocated in "scratch" memory BUT its backing ptr points to memory stored in 8 bit memory
    }

The default heap memory pool could be set at compile time. So for LTO Flash, you could set it to use the memory on the cart instead. Also, are there any other systems like the Intellivision that split up memory like this?
  15. Yes, C# structs ALWAYS get put on the stack unless they live inside a class / heap object. Classes ALWAYS get put on the heap unless they're static, which makes them compile-time objects only. We can use static class definitions (static class MyStaticClass), though, to store static structs for pre-allocating global static objects in memory... as globally declaring a variable in C would.
  16. Yes, but I've moved past this idea. I think it's best if we don't use the model I first articulated. Just going to re-post some code here of the "good" ideas so far (the ones that seem to work).

    [MemoryPool(MemoryPoolType.Bank_8bit)]// NOTE: all memory lives in this pool
    class MyGraphicsData// if this was a struct you would get a compiler error
    {
        public byte r, g, b;
        public int x, y, z;// two bytes each in this pool
    }

    class NormalObject// if MemoryPool not set, assume main 16 bit pool
    {
        public byte r, g, b;// 16 bit from padding but used as 8 bit
        public int x, y, z;// 16 bit int
    }

    struct SomeStruct {...}

    void Main()
    {
        var graphics = new MyGraphicsData();// heap allocated in 8 bit / graphics memory pool
        var normalObject = new NormalObject();// heap allocated in 16 bit memory pool
        var someStruct = new SomeStruct();// stack allocated in 16 bit memory as all stack objects would be (same as doing "auto someStruct = SomeStruct();" in C++)
    }

    // Stuff like ROM memory access could be done with an attribute like:
    [MemoryPool(MemoryPoolType.ROM, 0xFFFF)]// addressed by memory location?
    static class MyReadOnlyData
    {
        public readonly static int x, y, z;// only one readonly instance possible (thus static)
    }

    // BELOW shows how to define pre-allocated objects (statics)
    unsafe struct MyStruct
    {
        public fixed byte myFixedBuff[16];// C style buffer expressed in C#
    }

    [MemoryPool(MemoryPoolType.Bank_16bit)]
    static class MyClass
    {
        public static MyStruct myStruct;// would be allocated in the 16bit memory pool (doesn't use GC)
    }

No. Only in "unsafe" code could you get a raw ptr to this stack object, as you could in C/C++. "result" living on the stack is thought of the same way you would think of it in C/C++. You can do stuff like this in C# 7.3+ though:

    struct MyStruct
    {
        private int i;

        ref int Foo()
        {
            return ref i;// allows you to return a ref to a field (same as "return &i;" in C/C++)
        }
    }
  17. A stack is just as you describe. In my generic C# code example I tried to explain that C# just makes it look like a heap allocation when it's not, if you come from a C++ background. Notice the line in my code example in the post above:

    var result = new MyStruct();// the "new" keyword here when used on a struct isn't actually calling an allocator. A struct will always live on the stack when used in a method scope like this.

"new" in C# is just stating it's new memory, which can be on the stack (struct) or heap (class). Structs are always allocated on the stack and a class is always allocated on the heap. It's just a syntax thing, but it's done this way because C# has a much more powerful object initializer than C/C++. "auto result = MyStruct();" in C++ is the same as "var result = new MyStruct();" in C#. Yes, exactly. I would focus on this example, as it solves the issues we're talking about (look at the example it came from in my earlier post):

    var graphics = new MyGraphicsData();// heap allocated in 8 bit / graphics memory pool

As I said before, IL2X would have a micro GC for heap allocations that only requires 1 extra byte per allocation. This GC design also uses a defragger, so the memory aligns much like stack memory would. This approach would be acceptable for heap allocations. In C#, if you wanted to statically allocate objects, we could do this as well and choose what memory pool they go in. Example:

    unsafe struct MyStruct
    {
        public fixed byte myFixedBuff[16];// C style buffer expressed in C#
    }

    [MemoryPool(MemoryPoolType.Bank_16bit)]
    static class MyClass
    {
        public static MyStruct myStruct;// would be allocated in the 16bit memory pool (doesn't use GC)
    }
  18. Ok, so knowing this, I would say all heap allocations default to the "scratch" memory pool, with the option to heap allocate in the processor memory pool of course, but it sounds like that should be left for stack memory by default. With the attribute model in C#, the compiler will know what kind of pointer type to use if needed. If wanted, you could add a special-case field attribute to specify how a 16-bit number is divided in memory, but this sounds very rare. Yes, on almost all platforms it would be, BUT .NET is designed so this can change for specific platforms if needed. On Atari 2600, for example, I will probably make char 8-bit, even though it's 16-bit on a desktop, to handle the small memory limits better. On Intellivision, C#'s byte = 1 and char = 2 fits perfectly. Byte is conceptually still used as 1 byte and sizeof(byte) still returns 1; it's just padded in 16-bit memory. No, C# allows you to allocate an array of anything. I thought you were talking about how a C-style ptr can be accessed like an array in C, which isn't allowed in C# without using "unsafe" code, as shown in my example.
  19. Well, in C# a char is 2 bytes. Take a look at my last post; I revised the idea from the stuff you guys have posted. I think it may solve the issues you brought up.
  20. Looking back at this: is it safe to say that the built-in 8-bit memory on the Intellivision is primarily used for GPU / graphics RAM? If so, did a separate graphics chip read this memory for blanking? Just wondering. What if any memory stored in 8-bit memory (or any other external memory pool, for that matter) could only be allocated via heap allocations? Then there is only one 16-bit stack memory pool, which removes the need for any complexity here. In C# terms this could be enforced by using a class (as it's a ptr) and setting its MemoryPool to 8-bit memory, as that would enforce the compiler rule.

    [MemoryPool(MemoryPoolType.Bank_8bit)]
    class MyGraphicsData// if this was a struct you would get a compiler error
    {
        public byte r, g, b;
        public int x, y, z;// two bytes each in this pool
    }

    class NormalObject// if MemoryPool not set, assume main 16 bit pool
    {
        public byte r, g, b;// 16 bit from padding but used as 8 bit
        public int x, y, z;// 16 bit int
    }

    struct SomeStruct {...}

    void Main()
    {
        var graphics = new MyGraphicsData();// heap allocated in 8 bit / graphics memory pool
        var normalObject = new NormalObject();// heap allocated in 16 bit memory pool
        var someStruct = new SomeStruct();// stack allocated in 16 bit memory as all stack objects would be
    }

    // Stuff like ROM memory access could be done with an attribute like:
    [MemoryPool(MemoryPoolType.ROM, 0xFFFF)]// addressed by memory location?
    static class MyReadOnlyData
    {
        public readonly static int x, y, z;// only one readonly instance possible (thus static)
    }

Keep in mind a heap allocation in the IL2X micro GC system would only take one extra byte. The GC also de-fragments memory.
  21. So correct my thinking if I'm out of line here, as working with CPUs at this level isn't something I've ever really needed to think about much outside GPU shader programs. As you know, my thought process was that there would be two stacks simultaneously, and that means two stack pointers, NOT one you switch around (I was thinking both stacks had to be manually managed for each memory bank). I guess I was also thinking the CPU was more like an AVR chip (like some Arduino devices). So does the CP-1610 CPU pull memory from both 8- and 16-bit memory locations and store it in registers, or does it only put memory from 16-bit locations into registers? Let's try to clear up this attribute idea a little more then. Given this object:

    [MemoryPool(MemoryPoolType.Bank_8bit)]
    struct Vec3 {...}

    void Main()
    {
        var v = new Vec3();// this is a stack object but what memory pool do I live in? (I'm marked as living in 8 bit memory)
    }

So what options do we have here for where this object should be stored in memory?
1) If we have two stacks, each with a separate stack ptr, then this "v" instance could live in 8-bit memory. When the stack unwinds, it unwinds both the 8- and 16-bit stacks.
2) If we only have one stack, which lives in 16-bit memory, does this "v" instance get created on the 16-bit stack with memory padding? "MemoryPool" in this case is just a compiler hint.
3) Other options?
  22. C# is a statically compiled language. The reason for using IL via "Mono.Cecil" vs C# directly via "Roslyn" is that you get more C# 8.0+ features for free (at least that's the idea). You could even use Visual Basic or F# if you wanted to. So the project I'm working on, called IL2X, is a .NET IL translator. It converts IL to other native environments. It works like a .NET decompiler. Maybe this helps: https://github.com/reignstudios/IL2X In short, IL2X could in theory be made to directly translate .NET IL to CP-1610 ASM instructions. Intellivision, Atari 2600, etc. would have a special CoreLib for these platforms. For most Atari platforms IL2X just translates to CC65 / C89. The nice thing about C# vs C is that C# already has all the tools needed to make this special-case C# dialect using attributes. C# has a compiler-as-a-service API called Roslyn, and Mono.Cecil to analyze IL code as a service. This boils down to you getting the full-blown C# IDE tools on Win, Mac and Linux, such as MonoDevelop, Visual Studio Code or Visual Studio, which all have tons of IntelliSense features, etc.
  23. So I didn't think that example all the way through last night (thinking out loud here a little). I was thinking you could create a struct that could ONLY be used on the stack and couldn't be passed around as a method parameter, etc. It would just be syntax sugar. However, this syntax sugar could just generate two objects that live in each memory pool, and when you pass them as method parameters etc., it simply passes two objects around. Although a core issue comes up when you want to use this struct inside another struct or class. So for now let's assume this syntax and approach isn't practical. Also, FYI, C# doesn't allow you to access a struct ref like "a[0].x". You would get a compiler error, as that's considered unsafe code. You can do this, BUT ONLY in an "unsafe" code block, method or struct/class. This actually gives you the ability to make clearer rules about what is valid on the Intellivision and its odd memory system, I think. Stuff like this should work though (if this makes sense, do you see any holes for the Intellivision system?):

    [MemoryPool(MemoryPoolType.Bank_16bit)]
    struct Vec3
    {
        public int x, y, z;// puts into 16 bit memory
        public byte w;// puts into 16 bit memory with 8 bit padding

        public int Foo(int x, byte abc)// auto puts "x" into 16 bit stack and "abc" into 8 bit stack
        {
            return x + abc;// pulls from 16bit and 8bit stacks, writing the return value to the 16bit stack
        }

        [return:MemoryPool(MemoryPoolType.Bank_16bit)]
        public int Foo2([MemoryPool(MemoryPoolType.Bank_16bit)] int x, [MemoryPool(MemoryPoolType.Bank_8bit)] byte abc)// explicitly puts "x" into 16 bit stack and "abc" into 8 bit stack
        {
            return x + abc;// pulls from 16bit and 8bit stacks, writing the return value to the 16bit stack
        }
    }

    [MemoryPool(MemoryPoolType.ROM)]
    struct StorageObject
    {
        public int x;
    }

Also, here is a quick basic difference between C# and C/C++:

    struct MyStruct// structs are always pass by copy and can only live on the heap if contained inside a class
    {
        public int x, y, z;// in C# nothing is public by default, even in structs. In C everything is public, and C++ structs default to public.

        public MyStruct Foo()// C# is OO, so you normally use methods instead of functions as you would in C++
        {
            // NOTE: "var" is the same as "auto" in C++
            var result = new MyStruct();// the "new" keyword here when used on a struct isn't actually calling an allocator. A struct will always live on the stack when used in a method scope like this.
            MyStruct result2 = new MyStruct();// same as the line above but without the "var" syntax sugar keyword.
            return result;
        }

        public void Foo2(ref MyStruct p)// you can pass a struct by ref explicitly if needed (same as "MyStruct& p" in C++)
        {
            p.x = 0;// valid C# code
            p[0].x = 0;// invalid C# code (compiler error, even though p is a ptr behind the scenes; this is considered unsafe)
        }

        public unsafe void FooUnsafe(MyStruct* p)// you can pass a struct by ptr explicitly if needed (same as "MyStruct* p" in C)
        {
            p->x = 0;// valid unsafe C# code
            p[0].x = 0;// valid unsafe C# code
        }
    }

    class MyClass// classes are always pass by reference and always live on the heap (GC objects in short)
    {
        public MyStruct s;// inlined directly into MyClass's memory
        public MyClass c;// a pointer / reference to another class

        public void Foo(MyStruct someStruct, MyClass someClass)
        {
            someStruct.x = 0;// modifying this value only modifies a parameter copy that lives on the stack, as structs are pass by copy
            someClass.s.x = 0;// this will change "someClass"'s value, as classes are passed by ref
        }
    }

    // Now if you wanted to make some "functional" like features, you can do stuff like...
    static class MyClassExtensions
    {
        public static int GetX(this MyClass self)
        {
            return self.s.x;
        }
    }

    class SomeOtherClass
    {
        public void Foo(MyClass c)
        {
            int x = c.GetX();// this method is defined as an extension method external to the MyClass object
        }
    }

Hope this helps clear up some basics. For the most part C# is like C++ in general, but much more productive, as it normally takes a lot less code / fewer files to do the same thing. In either case I'm going to continue work on IL2X and keep the "MemoryPool" attribute idea in mind for Intellivision-type systems.
  24. I agree with this. A lang should not be conflated or convoluted with frameworks and APIs. Those should be modules you can pick and choose from. Also, here are some ideas I have for using C# to target Intellivision. From what you have said, I think the model I'm suggesting would be highly powerful, as it just gives compiler hints rather than forcing types to live in specific memory pools, which allows the same game logic code to be shared with the Atari 2600/5200, etc. C# allows you to create any custom attribute that can be used on structs, classes, fields and methods. Also, FYI, a char in C# is 2 bytes already. So take the following C# example (NOTE: structs in C# are the same as in C):

    using System;

    public enum MemoryPoolType
    {
        Auto,// auto choose what memory bank to use
        Bank_8bit,// 8 bit memory on Intellivision
        Bank_16bit,// 16 bit memory on Intellivision
        Bank_ROM// ROM cartridge / program memory
    }

    // IL2X translator looks at this attribute object at compile time when targeting Intellivision ASM output
    public class MemoryPoolAttribute : Attribute
    {
        public readonly MemoryPoolType type;
        public readonly int poolIndex;

        public MemoryPoolAttribute(MemoryPoolType type, int poolIndex)
        {
            this.type = type;
            this.poolIndex = poolIndex;
        }
    }

    [MemoryPool(MemoryPoolType.Bank_8bit, 0)]// give a compiler error if a type doesn't fit this rule on Intellivision, ignored on single memory pool systems
    struct Vec3
    {
        public byte x, y, z;
    }

    // this pattern allows you to have an "object oriented" design and access different memory pools from a single object
    struct Vec3
    {
        [MemoryPool(MemoryPoolType.Bank_8bit, 0)]
        public byte x, y;// puts in 8bit memory on Intellivision, ignored on single memory pool systems

        [MemoryPool(MemoryPoolType.Bank_16bit, 0)]
        public short z;// puts in 16bit memory on Intellivision, ignored on single memory pool systems

        // OR

        [MemoryPool(MemoryPoolType.Bank_16bit, 0)]
        public byte x, y;// puts in 16bit memory on Intellivision as 2 bytes per field, ignored on single memory pool systems

        [MemoryPool(MemoryPoolType.Bank_8bit, 0)]
        public short z;// put as 2 bytes in 8bit memory on Intellivision, ignored on single memory pool systems
    }

    struct Vec3
    {
        public byte x, y, z;// would auto choose Bank_8bit on Intellivision
    }

    struct Vec3
    {
        public int x, y, z;// 32bit types are changed to mean 16bit and sizeof(int) would respect this
    }

    struct Vec3
    {
        public int x, y, z;// auto puts into 16 bit memory

        public int Foo(int x, byte abc)// auto puts "x" into 16 bit stack and "abc" into 8 bit stack
        {
            return x + abc;// pulls from 16bit and 8bit stacks, writing the return value to the 16bit stack
        }
    }

    class MyObject
    {
        public char prefix;// 2 bytes on Intellivision, 1 byte on Atari 2600
    }

Does that make sense? The idea is expandable to more platforms than just Intellivision; it's general purpose. "poolIndex" is used if there are multiple 16 / 8 bit memory pools to choose from. Heap allocations could also use these attributes; the allocator would just have to keep track of multiple banks for a single class object.
  25. Well, because C# is just a better version of C++ in many aspects, if it compiled to odd-memory embedded platforms it would be a pretty good option. If not, a single-memory-ownership lang that doesn't need or use a GC, which my brother has come up with (kinda like Rust but way easier to use and read), might be best. I just love C# because of all the productive IDEs and tools you get with it. I also have a micro GC option that can run in less than a KB of RAM for embedded platforms. Normally you use non-GC objects, though, and pre-allocate stuff on the stack, but it's cool that you can make them.