Everything posted by zezba9000

  1. IntyBASIC isn't a portable language, and for me that isn't usable. I want to be able to write code in C (preferably) so that ~90% of game logic, etc. can be shared with CC65 for other Atari platforms, for example. My personal project is actually getting C# to run on legacy platforms, and the primary plan for making that happen was to target C. Besides this, C has a much smaller learning curve, as most people already know how to read its syntax. With my project IL2X I could translate .NET IL to IntyBASIC, but it sounds like a better idea to just target ASM (or C, if that ever happens).
  2. Looking at LCC's "token.h", it looks like you might just be able to force a "char" token to an "INT" symbol. https://github.com/drh/lcc/blob/master/src/token.h
  3. @ksherlock on GitHub suggested changing the LCC lexer to effectively do "#define char int": https://github.com/drh/lcc/issues/39 So in short, a "char" just gets compiled as an "int" (a rough sketch of what that means for user code follows below).
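     Here is a minimal sketch of what that hack would imply for C code on a word-addressed 16-bit target like the CP-1610. This is only my illustration of the resulting semantics, not anything LCC actually does today, and a real Intellivision target wouldn't have stdio:
     #include <stdio.h>

     int main(void)
     {
         char c = 'A';   /* lexes as char, but the compiler would treat it as a 16-bit int */
         char text[4];   /* 4 words of storage: one (wasteful) word per character          */

         /* With char folded into int, all three of these would print 1 on such a target: */
         printf("%u %u %u\n",
                (unsigned)sizeof(char),
                (unsigned)sizeof(int),
                (unsigned)sizeof(text[0]));
         (void)c; (void)text;
         return 0;
     }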
  4. I see, so unless you want to do the same hacks, GCC is the only option for a C compiler. In either case I'm going to continue IL2X, as I can force the concept of "sizeof(byte) = sizeof(int) = sizeof(IntPtr) = 1" in C# terms.
  5. I asked about it on GitHub. Fingers crossed, maybe someone has a good answer: https://github.com/drh/lcc/issues/39 From reading other material online it sounds like it should be possible, but maybe someone who has used LCC before has a better answer.
  6. For LCC, if you go here: https://sites.google.com/site/lccretargetablecompiler/lccmanpage then click on "Docs->Code Generate Interface", it will pull up this PDF: http://drhanson.s3.amazonaws.com/storage/documents/interface4.pdf Here are more directions for how someone made LCC target a 16-bit CPU using that PDF: http://www.fpgacpu.org/usenet/lcc.html Anyway, just some interesting options.
  7. I mean, you're targeting a very old CPU; using an older compiler like LCC isn't that bad. The online resources not being officially documented is a little silly, although you might be able to find a PDF version of the books on the Wayback Machine. Also, what about just writing an LLVM backend? http://llvm.org/docs/WritingAnLLVMBackend.html#introduction Then you could just use Clang. LLVM seems to have lots of docs.
  8. Here is an example of someone using LCC to target the PDP-11 16-bit CPU: http://telegraphics.com.au/sw/info/lcc-pdp11.html
  9. From the StackOverflow post, LCC may be the C compiler to use if it's much easier than GCC for this type of thing: https://stackoverflow.com/questions/7484466/choosing-cpu-architecture-for-llvm-clang
  10. Another option is the LCC C compiler: https://en.wikipedia.org/wiki/LCC_(compiler) It's designed to solve the kind of issues we are looking at. Someone used it here, for example, to target a 16-bit CPU: https://stackoverflow.com/questions/7484466/choosing-cpu-architecture-for-llvm-clang Here is the git repo: https://github.com/drh/lcc
  11. It also looks like this person was able to retarget GCC to the DSP1600 CPU: http://www.drdobbs.com/retargeting-the-gnu-c-compiler/184401529
  12. I was looking at some defines in GCC and noticed "BITS_PER_UNIT", which GCC uses directly for char's bit width. Ref: http://www.delorie.com/gnu/docs/gcc/gccint_112.html So I looked to see if this value is used in SDCC, and it looks like at least here it is: https://github.com/darconeous/sdcc/blob/master/support/cpp/output.h#L263 If you changed "BITS_PER_UNIT" in GCC or SDCC to 16, might that solve the problem? (A rough sketch of what that could look like in a target header is below.)
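     A very rough, hypothetical sketch of what a 16-bit-word target might set in its GCC target header. BITS_PER_UNIT, UNITS_PER_WORD, CHAR_TYPE_SIZE, and INT_TYPE_SIZE are real storage-layout macros from the old GCC internals docs linked above, but the values here are just my guess at what a CP-1610-style port would want, not a tested configuration:
     #define BITS_PER_UNIT   16              /* smallest addressable unit = one 16-bit word */
     #define UNITS_PER_WORD  1               /* so sizeof(char) == sizeof(int) == 1         */
     #define CHAR_TYPE_SIZE  BITS_PER_UNIT   /* char occupies a whole word                  */
     #define INT_TYPE_SIZE   16              /* int is also one word                        */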
  13. So there's no way to do a signed int in IntyBASIC? I think so, thanks. I think for the C language this idea sounds pretty good, although how do you plan to do static allocation in different memory pools, with static global structs for example? Add a C-style attribute keyword/token to the compiler (rough sketch below)? Also, do you think this approach might work with the SDCC compiler? And is it safe to call the extended 16-bit memory the LTO Flash gives you "System" memory? This example perfectly illustrates why a C-style language is desirable.
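     Purely hypothetical sketch of what such a pool attribute/keyword might look like in C. The __pool_8bit / __pool_16bit keywords are made up for illustration (SDCC already does something similar on other targets with storage-class keywords like __data and __xdata); they are defined away here so the sketch compiles as plain C:
     #define __pool_8bit    /* would place the object in the 8-bit scratch RAM  */
     #define __pool_16bit   /* would place the object in the 16-bit system RAM  */

     __pool_8bit  static unsigned char sprite_colors[8];  /* 8-bit pool  */
     __pool_16bit static int           player_scores[2];  /* 16-bit pool */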
  14. So it sounds like you're saying a C-style sizeof(int) in both 8-bit and 16-bit RAM would return 1, correct? I'm just trying to confirm this 100%, as this is very easy to do in a C# compiler without breaking the language, even if it were too hard to do in a C one, as @intvnut keeps pointing out.
     When you say you treat the 8-bit memory as a special case in IntyBASIC, what do you mean in terms of how you're treating it? Does IntyBASIC let you store a 16-bit int in 8-bit memory, for example? How does IntyBASIC handle the 10-bit keyboard memory: anything special, or does it just treat it like 16-bit RAM when doing arithmetic?
     Also, I could always make my IL2X translator just convert to IntyBASIC, but it sounds like IntyBASIC uses frameworks to access things like the "scratch" / LTO Flash memory pool, or how is this done? So I guess I'm asking: how does IntyBASIC handle creating an int16/int8 in one memory pool vs. the other?
     Can you also clarify whether I'm reading this correctly? The quote below states there are three 8-bit memory pools... however, do any of these memory location types, scratch or ECS, have 16-bit sections? @intvnut says graphics is only 8-bit, but what about the others? @intvnut says "scratch" memory is used a lot because it has a lot of space, BUT is all that space considered 8-bit memory only, OR is a large part of it used as 16-bit memory? Trying to weigh the options here. Sorry for all the questions, guys.
  15. OK, but when we store a C-style char in 8-bit memory, would we still consider it a WORD even though only 8 bits are usable? C#, for example, leaves it up to the runtime / platform implementation what the size of a byte or char is, but that aside, conceptually does having something like sizeof(char, MEMORY_POOL_8BIT) help when doing pointer arithmetic? If I were to store an int in 8-bit memory, wouldn't that actually be stored as two 16-bit-wide WORDs, totaling 32 bits with only 16 bits usable? Thus sizeof(int, MEMORY_POOL_8BIT) = 2. OK cool, so it sounds like packing is bad by default. Oops.
  16. So I was talking with my brother and he brought up something. Not sure if this could be used in SDCC or not, but... if you were to change how "sizeof" works and maybe add another operator called "sizeofBits", you could handle very odd hardware. If you were to do this in C, it might look like:
     int size = sizeof(char, MEMORY_POOL_8Bit);      // would return 1
     int size = sizeof(char, MEMORY_POOL_16Bit);     // would return 1
     int size = sizeof(int, MEMORY_POOL_8Bit);       // would return 2
     int size = sizeof(int, MEMORY_POOL_16Bit);      // would return 1
     int size = sizeofBits(char, MEMORY_POOL_8Bit);  // would return 8
     int size = sizeofBits(char, MEMORY_POOL_16Bit); // would return 16 (maybe 8?)
     int size = sizeofBits(int, MEMORY_POOL_8Bit);   // would return 16
     int size = sizeofBits(int, MEMORY_POOL_16Bit);  // would return 16
     In C#, it would look like this (very easy to add this feature into a custom .NET CoreLib):
     int size = Marshal.SizeOf<byte>(MemoryPoolType.Bank_8Bit);     // would return 1
     int size = Marshal.SizeOf<byte>(MemoryPoolType.Bank_16Bit);    // would return 1
     int size = Marshal.SizeOf<int>(MemoryPoolType.Bank_8Bit);      // would return 2
     int size = Marshal.SizeOf<int>(MemoryPoolType.Bank_16Bit);     // would return 1
     int size = Marshal.SizeOfBit<byte>(MemoryPoolType.Bank_8Bit);  // would return 8
     int size = Marshal.SizeOfBit<byte>(MemoryPoolType.Bank_16Bit); // would return 16 (maybe 8?)
     int size = Marshal.SizeOfBit<int>(MemoryPoolType.Bank_8Bit);   // would return 16
     int size = Marshal.SizeOfBit<int>(MemoryPoolType.Bank_16Bit);  // would return 16
     Maybe this was answered, BUT if I were to use one 16-bit int as two bytes, can the CP1610 instructions process the first and second halves of a 16-bit int as if they were two separate bytes (see the shift/mask sketch below)? If so you could save a lot of memory, BUT there is no way for pointer arithmetic to work correctly here, and I'm sure C frameworks would break (as you have stated, @River Patroller). In C#, however, if you disabled the use of pointers on objects that packed two bytes into one int, you wouldn't have an issue here. C# has a struct attribute that lets you define how memory is packed, in case you needed to force packing for pointer arithmetic, and we could extend it in a custom CoreLib.
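     For reference, here is what packing two logical bytes into one 16-bit word looks like in portable C using plain shift/mask arithmetic (my own sketch; whether the CP-1610 has cheaper instructions for touching each half directly is exactly the open question above):
     unsigned int pack_bytes(unsigned int hi, unsigned int lo) { return ((hi & 0xFFu) << 8) | (lo & 0xFFu); }  /* hi byte in upper half */
     unsigned int hi_byte(unsigned int word)                   { return (word >> 8) & 0xFFu; }                 /* extract upper byte    */
     unsigned int lo_byte(unsigned int word)                   { return  word       & 0xFFu; }                 /* extract lower byte    */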
  17. Totally, you would never use a GC object if targeting "unexpanded Intellivisions", as you put it. I was thinking in terms of extra memory like the LTO Flash gives me, which has plenty of RAM for a List<T> object.
  18. So here is another thing to consider: what memory pool should standard CoreLib objects, such as List, be put in?
     [MemoryPool(MemoryPoolType.Bank_8bit)]
     struct MyStruct {...}

     void Main()
     {
         var myStructArray = new List<MyStruct>();// The List object is allocated in "scratch" memory BUT its backing ptr points to memory stored in 8 bit memory
     }
     The default heap memory pool could be set at compile time. So for LTO Flash, you could set it to use the memory on the cart instead. Also, are there any other systems like the Intellivision that split up memory like this?
  19. Yes, C# structs ALWAYS get put on the stack unless they live inside a class / heap object. Classes ALWAYS get put on the heap unless they're static, which makes them compile-time objects only. We can use static class definitions (static class MyStaticClass), though, to store static structs for pre-allocating global static objects in memory... much as globally declaring a variable in C would (rough C analogy below).
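     A rough C analogy for that last point (my own illustration, not anything IL2X generates): a file-scope struct is reserved at link time in a data section, with no allocator involved, just like a static field on a static C# class.
     struct Player { int x, y; unsigned char lives; };
     static struct Player g_player;  /* pre-allocated global: no heap, no stack, no GC */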
  20. Yes, but I've moved past this idea. I think it's best if we don't use that model I first articulated. Just going to re-post some code here of the "good" ideas so far (that seem to work).
     [MemoryPool(MemoryPoolType.Bank_8bit)]// NOTE: all memory lives in this pool
     class MyGraphicsData// if this was a struct you would get a compiler error
     {
         public byte r, g, b;
         public int x, y, z;// two bytes
     }

     class NormalObject// if MemoryPool not set, assume main 16 bit pool
     {
         public byte r, g, b;// 16 bit from padding but used as 8 bit
         public int x, y, z;// 16 bit int
     }

     struct SomeStruct {...}

     void Main()
     {
         var graphics = new MyGraphicsData();// heap allocated in 8 bit / graphics memory pool
         var normalObject = new NormalObject();// heap allocated in 16 bit memory pool
         var someStruct = new SomeStruct();// stack allocated in 16 bit memory as all stack objects would be (same as doing "auto someStruct = SomeStruct();" in C/C++)
     }

     // Stuff like ROM memory access could be done with an attribute like:
     [MemoryPool(MemoryPoolType.ROM, 0xFFFF)]// addressed by memory location?
     static class MyReadOnlyData
     {
         public readonly static int x, y, z;// only one readonly instance possible (thus static)
     }

     // BELOW shows how to define pre-allocated objects (statics)
     unsafe struct MyStruct
     {
         public fixed byte myFixedBuff[16];// C style buffer expressed in C#
     }

     [MemoryPool(MemoryPoolType.Bank_16bit)]
     static class MyClass
     {
         public static MyStruct myStruct;// would be allocated in the 16bit memory pool (doesn't use GC)
     }
     No. Only in "unsafe" code could you get a raw ptr to this stack object, as you could in C/C++. "result" living on the stack is thought of the same way you would in C/C++. You can do stuff like this in C# 7.3+ though:
     struct MyStruct
     {
         private int i;
         ref int Foo()
         {
             return ref i;// allows you to return a ref to a struct field (same as "return &i;" in C/C++)
         }
     }
  21. A stack is just as you describe. In my generic C# code example I tried to explain that C# just makes it look like a heap allocation when it's not, if you come from a C++ background. Notice the line in my code example in the post above:
     var result = new MyStruct();// the "new" keyword here when used on a struct isn't actually calling an allocator
     A struct will always live on the stack when used in a method scope like this. "new" in C# just states that it's new memory, which can be on the stack (struct) or heap (class). Structs are always allocated on the stack and a class is always allocated on the heap. It's just a syntax thing, done this way because C# has a much more powerful object initializer than C/C++. "auto result = MyStruct();" in C++ is the same as "var result = new MyStruct();" in C#.
     Yes, exactly. I would focus on this example below, as it solves the issues we're talking about. Look at the example above it came from in my earlier post:
     var graphics = new MyGraphicsData();// heap allocated in 8 bit / graphics memory pool
     As I said before, IL2X would have a micro GC for heap allocations that only requires 1 extra byte per allocation (see the toy sketch below). This GC design also uses a defragmenter, so the memory stays packed much like stack memory would. This approach would be acceptable for heap allocations.
     In C#, if you wanted to statically allocate objects, we could do this as well and choose what memory pool it goes in. Example:
     unsafe struct MyStruct
     {
         public fixed byte myFixedBuff[16];// C style buffer expressed in C#
     }

     [MemoryPool(MemoryPoolType.Bank_16bit)]
     static class MyClass
     {
         public static MyStruct myStruct;// would be allocated in the 16bit memory pool (doesn't use GC)
     }
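     Toy sketch only (not IL2X's real GC, just to make the "1 extra byte per allocation" cost model concrete): a bump allocator in C where each object is preceded by a single header byte holding its size plus room for a mark flag that a compacting collector could use.
     #include <stddef.h>

     static unsigned char heap[256];   /* the whole managed heap                    */
     static size_t        top = 0;     /* bump pointer                              */

     void *toy_alloc(unsigned char size)  /* object size limited to 127 units here  */
     {
         if (size > 127u || top + 1u + size > sizeof heap)
             return 0;                 /* out of memory (a real GC would compact)   */
         heap[top] = size;             /* 1-byte header: size, mark bit cleared     */
         void *obj = &heap[top + 1u];
         top += 1u + size;
         return obj;
     }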
  22. OK, so knowing this I would say all heap allocations default to the "scratch" memory pool, with the option to heap allocate in the processor memory pool of course, but it sounds like that should be left for stack memory by default. With the attribute model in C#, the compiler will know what kind of pointer type to use if needed. If wanted, you can add a special-case field attribute to specify how a 16-bit number is divided in memory, but that sounds very rare.
     Yes, on almost all platforms it would be, BUT .NET is designed so this can change for specific platforms if needed. On Atari 2600, for example, I will probably make char 8-bit even though it's 16-bit on a desktop, to handle the small memory limits better. On Intellivision, C#'s byte = 1 and char = 2 fits perfectly. Byte is conceptually still used as 1 byte and sizeof(byte) still returns 1; it's just padded in 16-bit memory.
     No, C# allows you to allocate an array of anything. I thought you were talking about how a C-style ptr can be accessed like an array in C even when it isn't one, which isn't allowed in C# without using "unsafe" code, as shown in my example.
  23. Well, in C# a char is 2 bytes. Take a look at my last post; I revised the idea based on stuff you guys have posted. I think it may solve the issues you brought up.
  24. Looking back at this: is it safe to say that the built-in 8-bit memory on the Intellivision is primarily used for GPU / graphics RAM? If so, did a separate graphics chip read this memory for blanking? Just wondering.
     What if any memory stored in 8-bit memory (or any other external memory pool, for that matter) could only be placed there via heap allocations? Then there is only one 16-bit stack memory pool, which removes the need for any complexity here. In C# terms this could be enforced by using a class (as it's a ptr) and setting its MemoryPool to 8-bit memory, as that would enforce the compiler rule.
     [MemoryPool(MemoryPoolType.Bank_8bit)]
     class MyGraphicsData// if this was a struct you would get a compiler error
     {
         public byte r, g, b;
         public int x, y, z;// two bytes
     }

     class NormalObject// if MemoryPool not set, assume main 16 bit pool
     {
         public byte r, g, b;// 16 bit from padding but used as 8 bit
         public int x, y, z;// 16 bit int
     }

     struct SomeStruct {...}

     void Main()
     {
         var graphics = new MyGraphicsData();// heap allocated in 8 bit / graphics memory pool
         var normalObject = new NormalObject();// heap allocated in 16 bit memory pool
         var someStruct = new SomeStruct();// stack allocated in 16 bit memory as all stack objects would be
     }

     // Stuff like ROM memory access could be done with an attribute like:
     [MemoryPool(MemoryPoolType.ROM, 0xFFFF)]// addressed by memory location?
     static class MyReadOnlyData
     {
         public readonly static int x, y, z;// only one readonly instance possible (thus static)
     }
     Keep in mind that a heap allocation in the IL2X micro GC system would only take one extra byte. The GC also de-fragments memory.
  25. So correct my thinking if I'm out of line here, as working with CPUs at this level isn't something I've ever really needed to think about much outside GPU shader programs. As you know, my thought process was that there would be two stacks simultaneously, and that means two stack pointers, NOT one you switch around (I was thinking both stacks had to be manually managed for each memory bank). I guess I was also thinking the CPU was more like an AVR chip that doesn't even have registers (like some Arduino devices). So does the CP-1610 CPU pull memory from both 8-bit and 16-bit memory locations into registers, or only from 16-bit ones?
     Let's try to clear up this attribute idea a little more then. Given this object:
     [MemoryPool(MemoryPoolType.Bank_8bit)]
     struct Vec3 {...}

     void Main()
     {
         var v = new Vec3();// this is a stack object but what memory pool do I live in? (I'm marked as living in 8 bit memory)
     }
     So what options do we have here for where this object should be stored in memory?
     1) If we have two stacks, each with a separate stack ptr, then this "v" instance could live in 8-bit memory. When the stack unwinds, it unwinds both the 8-bit and 16-bit stacks (see the toy two-stack sketch below).
     2) If we only have one stack that lives in 16-bit memory, does this "v" instance get created on the 16-bit stack with memory padding? "MemoryPool" in this case is just a compiler hint.
     3) Other options?
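     For what it's worth, here is a toy C illustration of option 1 (two software-managed stacks, one per pool). All names are mine, and nothing here reflects how IL2X or the real CP-1610 calling convention works; it just shows what "two stack pointers that unwind together" means:
     static unsigned char stack8[64];    static unsigned int sp8  = 0;  /* 8-bit pool stack  */
     static unsigned int  stack16[128];  static unsigned int sp16 = 0;  /* 16-bit pool stack */

     void *push8(unsigned int n)  { void *p = &stack8[sp8];   sp8  += n; return p; }  /* no bounds checks: toy only */
     void *push16(unsigned int n) { void *p = &stack16[sp16]; sp16 += n; return p; }

     void example_frame(void)
     {
         unsigned int save8 = sp8, save16 = sp16;  /* prologue saves both stack tops */
         void *v     = push8(3);                   /* Vec3 placed in the 8-bit pool  */
         void *local = push16(2);                  /* some 16-bit locals             */
         (void)v; (void)local;
         sp8 = save8; sp16 = save16;               /* epilogue unwinds both stacks   */
     }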