
Linux-like OS for the TI


36 replies to this topic

#1 insomnia OFFLINE  

insomnia

    Star Raider

  • 90 posts
  • Location:Pittsburgh, PA

Posted Fri Aug 31, 2018 11:23 PM

So I've mentioned on here a few times that I've been working on a new operating system for the TI. Development has gotten to the point where I think it's time to do some kind of announcement. Additionally, I've read the suggestion that something like Contiki should be ported one too many times. We can do better, folks.

 

I'm taking inspiration from Linux for several reasons. It's clearly a successful design. If it's similar enough, I might be able to port some code instead of writing everything from scratch. Writing user programs will be easier since there is no shortage of good references out there.

 

Keep in mind, I'm nowhere close to an alpha release, but I did want to get some input on whether what I'm working on might be of interest to someone besides me.

 

The design philosophy I've been following is that while TI wrote a bunch of neat code for the time, modern developers should only be limited by the constraints of hardware, not software decisions of the past. So that means backwards-compatibility is out. Developing against raw hardware is in.

 

I'm assuming this as a base platform:

  Console

  1MB AMS card

  Two 96KB floppy drives

  RS232 card

 

Here's a current feature list:

  Kernel runs from a cart image

  Device driver API

  Drivers for:

    text screen

    keyboard

    floppy drive

    /dev/null

    /dev/zero

    /dev/random

  POSIX-compatible filesystem API (see the sketch after this list)

  CFS filesystem (a simple filesystem similar to ext2)

  Preemptive multitasking

  Priority scheduling

  Mutexes

  Semaphores

  Shared memory

  Syscall API

  Timers
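
Here's roughly what a user program could look like against that filesystem API. This is only a sketch; the file paths are examples rather than anything the kernel actually defines yet:

/* Sketch of a user program using the POSIX-compatible filesystem API.
 * The paths below are examples only; real device/file naming is TBD. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[16];

    /* Pull a few bytes from the random device driver */
    int rng = open("/dev/random", O_RDONLY);
    if (rng < 0)
        return 1;
    read(rng, buf, sizeof(buf));
    close(rng);

    /* Write them to a file on a CFS-formatted floppy (example path) */
    int out = open("/fd0/noise.bin", O_WRONLY | O_CREAT, 0644);
    if (out < 0)
        return 1;
    write(out, buf, sizeof(buf));
    close(out);

    return 0;
}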

 

Conspicuously missing features:

  User shell

  Use of AMS for process memory separation

  Loading executables (EA5 and ELF)

  Shell scripting

  Some kind of catchy name

 

As of today, the kernel is about 12KB and about 10,000 lines of code (not including libc or libgcc). The code will eventually be put up on GitHub after I clean it up a bit. I need to impose a coding standard and make sure all dead code has been removed properly.

 

So, what does everyone think of this project? Is this something that sounds interesting or possibly useful? Is there something I seem to have overlooked? Deep-seated opposition to the very concept? Let me know.

 

I'm trying to work on documentation, but I suspect it will always lag the current code state. If there's some aspect of this thing someone would like more information on, I'll be happy to share.

 

Thanks for listening.



#2 RXB OFFLINE  

RXB

    River Patroller

  • 3,370 posts
  • Location:Vancouver, Washington, USA

Posted Fri Aug 31, 2018 11:37 PM

Great idea, and much more likely if a 4Meg SAMS is created.

Combined with an F18 MK2 it would be at least similar to an IBM PC in some features.



#3 arcadeshopper OFFLINE  

arcadeshopper

    River Patroller

  • 3,850 posts
  • Location:Portland, Oregon USA

Posted Sat Sep 1, 2018 12:28 AM


Awesome

[Attached image: large.jpeg]


#4 retroclouds OFFLINE  

retroclouds

    Stargunner

  • 1,654 posts
  • Location:Germany

Posted Sat Sep 1, 2018 12:53 AM

Very cool. I like the idea of combining cart and SAMS. I've been toying around with the idea of doing something similar.
My idea is to go for the first revision of Minix, as it matches the hardware of that time. This is more of a personal project, as I'd like to learn more about OS concepts, and I don't have any code developed at this time.


Particularly interesting is how you would handle preemption.
For my Minix project I was thinking about a small sidecar board with software-controlled hardware timers that generate load interrupts. As said, these are only ideas floating around in my mind.

Good luck with the project! I am very interested in seeing how this evolves further.

#5 HOME AUTOMATION OFFLINE  

HOME AUTOMATION

    Chopper Commander

  • 147 posts

Posted Sat Sep 1, 2018 1:26 AM

I don't understand Linux, but the ground-up approach sounds inspiring. If it takes off... I hope I'm aboard! I imagine this could help with cross-platform compatibility! :cool:



#6 TheMole OFFLINE  

TheMole

    Dragonstomper

  • 806 posts
  • Location:Belgium

Posted Sat Sep 1, 2018 4:24 AM

SUPER excited about this! I have been thinking of something like this myself for a long time as well, and I'd be happy to work on software for this environment once it's ready for that. Let me know if you think that'd help.



#7 Vorticon OFFLINE  

Vorticon

    River Patroller

  • 3,395 posts
  • Location:Eagan, MN, USA

Posted Sat Sep 1, 2018 5:15 AM

Fantastic project! It's about time the TI gets a proper OS. The hardware requirements could be problematic for those without a PEB, although Eric's Tiny PEB project might plug that hole nicely once completed.

That said, how will the end user program on that OS? Will it eventually include an assembler or be able to run something like a C or Pascal compiler? I assume that Basic and XB program launching will be out.

 

Truly terrific effort. I hope you are able to bring it to fruition.



#8 TheBF OFFLINE  

TheBF

    Dragonstomper

  • 792 posts
  • Location:The Great White North

Posted Sat Sep 1, 2018 8:12 AM

Nice work by you.

 

The "obvious" catchy name is TINUX, but there are more creative people out there. :-)



#9 Shift838 OFFLINE  

Shift838

    River Patroller

  • 2,436 posts
  • SHIFT838
  • Location:Deer Park, Texas

Posted Sat Sep 1, 2018 4:59 PM

This would be great to see completed one day. I love Linux-type systems!



#10 RXB OFFLINE  

RXB

    River Patroller

  • 3,370 posts
  • Location:Vancouver, Washington, USA

Posted Sat Sep 1, 2018 5:22 PM

I don't understand Linux, but the ground-up approach sounds inspiring. If it takes off... I hope I'm aboard! I imagine this could help with cross-platform compatibility! :cool:

The original central Linux core was 64K, but the problem was modules/pages/memory that required a minimum 640K computer.

SAMS is 1Meg, so this is a doable idea.

Linux was an extreme cut-down version of UNIX (which would only run on mainframes and minicomputers with 200Meg of memory),

thus Linux would fit on an IBM PC.

Linux became all the rage as it put an OS other than DOS/Windows/OS 8 onto a desktop computer.



#11 Gemintronic OFFLINE  

Gemintronic

    Jason S. - Lead Developer & CEO

  • 9,221 posts

Posted Sat Sep 1, 2018 5:34 PM

I guess I was one of those who mentioned Contiki recently. Didn't mean to annoy. If this is POSIX-compatible, that should make porting existing applications smoother.



#12 insomnia OFFLINE  

insomnia

    Star Raider

  • Topic Starter
  • 90 posts
  • Location:Pittsburgh, PA

Posted Sun Sep 2, 2018 1:35 AM

First off, I should answer some of these questions.

 

@ retroclouds RE: scheduler preemption

I'm using the VDP interrupt to periodically check the process queue to see if there is a higher-priority process that is ready to run. If so, the currently running process is packed up and the new one is started. Additionally, each time a syscall is invoked (basically any IPC or file I/O operation), the system pauses the calling process and starts the next ready, highest-priority process. In some pathological cases, it is possible for a process to starve same- or lower-priority processes, but that's a risk with other operating systems as well. In all cases, higher-priority processes are started when expected.
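
In rough pseudo-C, the check that runs off the VDP interrupt boils down to something like this (all names and conventions here are made up for illustration, not the actual kernel symbols):

/* Illustrative sketch of the preemption check run from the VDP interrupt. */
enum pstate { READY, RUNNING, BLOCKED };

struct proc {
    int         priority;   /* larger value = higher priority (example convention) */
    enum pstate state;
};

extern struct proc *current;                  /* process that owns the CPU  */
extern struct proc *highest_priority_ready(void);
extern void save_context(struct proc *p);     /* "pack up" the running task */
extern void restore_context(struct proc *p);  /* resume the chosen task     */

void vdp_tick(void)
{
    struct proc *next = highest_priority_ready();

    /* Preempt only when a strictly higher-priority process is ready */
    if (next && next->priority > current->priority) {
        save_context(current);
        current->state = READY;
        next->state = RUNNING;
        restore_context(next);
    }
}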

 

@TheMole RE: additional developers

Once I get the major system design decisions made and get more of a framework put together it would definitely be helpful to bring on additional help. I'm  typically used to being a solo developer, but in the interest of time and feature coverage, it's probably a good idea to get additional hands on the project at some point.

 

@TheBF RE: Tinix

I thought about the same name, but unfortunately someone already has a minix-derived system by that name. Oh well.

 

@gemintronic RE: contiki

Don't worry about it. I looked into porting Contiki a while ago, and it just wasn't a good fit for the TI. A LOT of compromises were made to make that OS work. It's a microkernel with stackless processes. The whole project has the feel of a demo program. Impressive, but highly tuned for a single purpose. We have enough resources to make something with a more familiar feel and a lot more capability.

 

@Vorticon RE: user programming

Good question. For now I expect cross-development on a PC to be the only way to go. I would eventually like to get self-hosted development working (C, assembly, Pascal, BASIC, etc.), but that's a ways off.

 

So last night I looked at the AMS card, and started thinking about how best to use that resource. Specifically, how do we maximize memory for user processes? Even though we have up to 1MB of RAM available, we don't have a Memory Management unit (MMU) or facilities in the processor to detect reads or writes to unmapped space. We would need to add wrapper code to every read or write operation to guarantee that the intended memory is available for use before the operation takes place. This looked even worse as I tried to figure out a way to implement that.
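
To make that overhead concrete, here's roughly what every single store would have to turn into under such a scheme (the helper names are hypothetical); this is exactly the cost I want to avoid:

/* Hypothetical software-checked write, shown only to illustrate the overhead. */
extern int  page_is_mapped(volatile unsigned char *p);   /* made-up helpers */
extern void map_in_page(volatile unsigned char *p);

void checked_write_byte(volatile unsigned char *p, unsigned char value)
{
    /* Every store pays for this check, and possibly a mapper update too */
    if (!page_is_mapped(p))
        map_in_page(p);
    *p = value;
}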

 

I looked at the source for uClinux, which is a variant of Linux that supports processors without MMUs. They resolve the problem by not allowing virtual memory or similar systems like what I was originally looking at. This wasn't too surprising, since I wasn't very far from making the same decision.

 

So here's the memory model for user processes:

 

0x2000 - 0x3fff : Application code, 8KB paged

0xa000 - 0xfdff : Application memory, 23.5 KB

0xfe00 - 0xffff : Process stack, 512 bytes

 

If more memory is needed for data, we can use a floppy or a temp file residing in AMS space.

 

This should be OK. I've already got infrastructure set up for handling paged kernel code. This should be extendable for user code. In my experience, there's usually about an 80/20 split between code and data. So for 24 KB data, we would typically have a 120 KB executable file, and with a total of 1 MB RAM available, we can simultaneously run about 8 of these executables. Alternatively, we should be able to run up to 128 lightweight processes (4 KB code, 4 KB data). Not too shabby.

 

So the next step for me is to get AMS mapping working, and to find a good way to do block management to allow kernel and applications to share memory resources. Temp file management will probably come after that. Finally, I think application loaders will be good to work on. At that point, an alpha release doesn't seem so far away.
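
For the block management piece, one simple option is a plain bitmap allocator over the AMS pages. A rough sketch, assuming 4 KB pages and 256 of them in a 1 MB card (the numbers and names are placeholders, not a settled interface):

/* Rough sketch of a bitmap allocator over AMS pages (one bit per page). */
#define AMS_PAGE_COUNT 256

static unsigned char page_bitmap[AMS_PAGE_COUNT / 8];

/* Returns a free page number, or -1 if memory is exhausted */
int ams_page_alloc(void)
{
    int i;
    for (i = 0; i < AMS_PAGE_COUNT; i++) {
        if (!(page_bitmap[i >> 3] & (1 << (i & 7)))) {
            page_bitmap[i >> 3] |= (unsigned char)(1 << (i & 7));
            return i;
        }
    }
    return -1;
}

void ams_page_free(int page)
{
    if (page >= 0 && page < AMS_PAGE_COUNT)
        page_bitmap[page >> 3] &= (unsigned char)~(1 << (page & 7));
}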

 



#13 RXB OFFLINE  

RXB

    River Patroller

  • 3,370 posts
  • Location:Vancouver, Washington, USA

Posted Sun Sep 2, 2018 2:44 AM


 

Hmm, the RAG AMS Assembler should do the trick for you; it already has the memory manager done in assembly.



#14 JB ONLINE  

JB

    Quadrunner

  • 9,244 posts
  • With Stereo-Of-The-Art-Sound

Posted Sun Sep 2, 2018 4:34 AM

@TheBF RE: Tinix

I thought about the same name, but unfortunately someone already has a minix-derived system by that name. Oh well.

 

99ix?

 

Or steal naming inspiration from a DIFFERENT child of Unix and call it Plan99. Perhaps a little misleading, but I'm sure people will figure it out.



#15 HOME AUTOMATION OFFLINE  

HOME AUTOMATION

    Chopper Commander

  • 147 posts

Posted Sun Sep 2, 2018 4:37 AM

After reviewing your initial post I imagined a year or more until a rollout, if at all... now I feel... I should not underestimate your resolve! Though I can't even attempt to wrap my mind around the specifics of LINUX right now... I believe that I recognize your brainstorming through the "foundationals" technique. Sounds to me like you are quite familiar with this subject matter. Looking forward with high expectations. ;-)



#16 mizapf OFFLINE  

mizapf

    River Patroller

  • 3,401 posts
  • Location:Germany

Posted Sun Sep 2, 2018 5:28 AM

0xfe00 - 0xffff : Process stack, 512 bytes

 

I suggest skipping addresses FFFA-FFFF (9995 decrementer, LOAD/NMI vector). But these are details.



#17 Asmusr ONLINE  

Asmusr

    River Patroller

  • 2,909 posts
  • Location:Denmark

Posted Sun Sep 2, 2018 12:10 PM

So last night I looked at the AMS card, and started thinking about how best to use that resource. Specifically, how do we maximize memory for user processes? Even though we have up to 1MB of RAM available, we don't have a Memory Management unit (MMU) or facilities in the processor to detect reads or writes to unmapped space. We would need to add wrapper code to every read or write operation to guarantee that the intended memory is available for use before the operation takes place. This looked even worse as I tried to figure out a way to implement that.

 

Since all processes have to behave nicely anyway, e.g. they must enable interrupts from time to time and they cannot trash the ISR, why can't they also be required to behave nicely when it comes to memory use? I don't think much new will come out of this without being able to allocate more than 32K for each process.



#18 TheMole OFFLINE  

TheMole

    Dragonstomper

  • 806 posts
  • Location:Belgium

Posted Mon Sep 3, 2018 2:25 AM



 

Since all processes have to behave nicely anyway, e.g. they must enable interrupts from time to time and they cannot trash the ISR, why can't they also be required to behave nicely when it comes to memory use? I don't think much new will come out of this without being able to allocate more than 32K for each process.

 

Not sure that's what Insomnia's implying. I think the point is that having wrapper code around every read or write operation automatically provided/enforced by the OS, compiler and libc would create a performance penalty beyond what would be acceptable. The compromise that is being proposed, I think, is that data (heap and bss, in C terminology) is always in the same place, in the same page for a given process (although multiple processes can have data in multiple pages); however, code can span as many 8k segments as one wants. So, in principle each process can allocate as much memory as it likes... the limitation is only for variables/data.



#19 TheMole OFFLINE  

TheMole

    Dragonstomper

  • 806 posts
  • Location:Belgium

Posted Mon Sep 3, 2018 2:32 AM

Having said that, I do think it would be nice for the OS to provide facilities for paging of data segments as well. It would, as Rasmus points out, be up to the programmer to use that feature carefully if he/she decides to go that route. You could just provide syscalls that request additional pages in the 0xa000-0xc000 area so the OS knows to stay away from these for other processes and have the programmer do the rest of the paging manually/explicitly.
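
In rough C, the kind of interface I'm imagining would look something like this (every name here is purely hypothetical, nothing from the actual project):

/* Hypothetical data-page syscalls; names are illustrative only. */
extern int  sys_page_request(void *window);          /* reserve a page, get a handle back */
extern void sys_page_map(int handle, void *window);  /* bank it into my address window    */
extern void sys_page_release(int handle);

#define DATA_WINDOW ((void *)0xa000)

void example(void)
{
    int big = sys_page_request(DATA_WINDOW);  /* OS marks this page as mine         */

    sys_page_map(big, DATA_WINDOW);           /* explicit, programmer-driven paging */
    /* ... work on the data now visible at 0xa000 ... */
    sys_page_release(big);
}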

 

It would probably also be nice if paging of code segments could be automated for C programs?


Edited by TheMole, Mon Sep 3, 2018 2:35 AM.


#20 RXB OFFLINE  

RXB

    River Patroller

  • 3,370 posts
  • Location:Vancouver, Washington, USA

Posted Mon Sep 3, 2018 4:26 AM

Having said that, I do think it would be nice for the OS to provide facilities for paging of data segments as well. It would, as Rasmus points out, be up to the programmer to use that feature carefully if he/she decides to go that route. You could just provide syscalls that request additional pages in the 0xa000-0xc000 area so the OS knows to stay away from these for other processes and have the programmer do the rest of the paging manually/explicitly.

 

It would probably also be nice if paging of code segments could be automated for C programs?

RAG AMS  (SAMS) Assembler has that built in and has a C compiler too...so most of that work is done already.



#21 Asmusr ONLINE  

Asmusr

    River Patroller

  • 2,909 posts
  • Location:Denmark

Posted Mon Sep 3, 2018 6:30 AM

 

Not sure that's what Insomnia's implying. I think the point is that having wrapper code around every read or write operation automatically provided/enforced by the OS, compiler and libc would create a performance penalty beyond what would be acceptable. The compromise that is being proposed, I think, is that data (heap and bss, in C terminology) is always in the same place, in the same page for a given process (although multiple processes can have data in multiple pages); however, code can span as many 8k segments as one wants. So, in principle each process can allocate as much memory as it likes... the limitation is only for variables/data.

 

I'm not sure what I'm suggesting or how it would work. It just appears to me that if all software is written from scratch in C using common library routines to allocate and deallocate memory, and if all software is expected to follow certain rules for good behavior, there might be less need to check all reads and writes at runtime. An 80/20 code/data ratio seems to depend a lot on the use case. For instance, for a new text editor it would probably be more like 20/80. It would be nice to be able, from C, to allocate more memory for data, not as raw pages, but as something like arrays with far pointers.
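
Roughly what I have in mind, with made-up names and sizes:

/* Sketch of a "far array": element access banks in the right AMS page first.
 * Names, the 4 KB page size, and the window address are illustrative only. */
#define FAR_PAGE_SIZE 4096u
#define FAR_WINDOW    ((unsigned char *)0xa000)

struct far_array {
    int first_page;   /* first AMS page backing the array */
};

extern void ams_map(int page, unsigned char *window);   /* hypothetical mapper call */

unsigned char far_get(struct far_array *a, unsigned long i)
{
    ams_map(a->first_page + (int)(i / FAR_PAGE_SIZE), FAR_WINDOW);
    return FAR_WINDOW[i % FAR_PAGE_SIZE];
}

void far_put(struct far_array *a, unsigned long i, unsigned char v)
{
    ams_map(a->first_page + (int)(i / FAR_PAGE_SIZE), FAR_WINDOW);
    FAR_WINDOW[i % FAR_PAGE_SIZE] = v;
}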



#22 Lord Vorkosigan OFFLINE  

Lord Vorkosigan

    Combat Commando

  • 3 posts
  • Location:France

Posted Mon Sep 3, 2018 6:58 AM

Hi,

 

This is a really great project!

 

Have you heard about Fuzix, the Unix-like project for 8-bit computers?

http://www.fuzix.org/

 

It's a project from Alan Cox, one of the Linux "grandmasters", and it's already running on some pretty old computers like the TRS 1, CoCo 2, CoCo 3, or Amstrad NC200.

There's even a page on their GitHub wiki about the Geneve being a perfect candidate for Fuzix...

https://github.com/E...rm-Review-Notes

One point I like with this project is that it's built to share the same code across every platform (where possible), which means that applications can be recompiled for each architecture.

 

As it's intended from the beginning to run on very limited computers, it would probably be a more interesting source than Minix or Linux.



#23 TheMole OFFLINE  

TheMole

    Dragonstomper

  • 806 posts
  • Location:Belgium

Posted Mon Sep 3, 2018 7:22 AM

RAG AMS  (SAMS) Assembler has that built in and has a C compiler too...so most of that work is done already.

I'm sure some stuff can be learned from those implementations, but the point is to make a POSIX-like environment, built off of the gcc toolchain, so swapping out assemblers and compilers is entirely out. Also, the initial development environment will be cross-platform, which makes the use of RAG tools impossible, I'd say.

 

I'd never heard of Fuzix before, that's really interesting. It seems that they've considered (and support) a wide array of memory management options: https://github.com/E...mory-Management. Should be a good source of inspiration!



#24 RXB OFFLINE  

RXB

    River Patroller

  • 3,370 posts
  • Location:Vancouver, Washington, USA

Posted Mon Sep 3, 2018 3:18 PM

I'm sure some stuff can be learned from those implementations, but the point is to make a POSIX-like environment, built off of the gcc toolchain, so swapping out assemblers and compilers is entirely out. Also, the initial development environment will be cross-platform, which makes the use of RAG tools impossible, I'd say.

 

I'd never heard of Fuzix before, that's really interesting. It seems that they've considered (and support) a wide array of memory management options: https://github.com/E...mory-Management. Should be a good source of inspiration!

OK, how much data on SAMS do they provide for C memory management?

RAG C AMS (SAMS) compiler shows the C code for memory management using the SAMS, just about EXACTLY what is needed.

 

Unless you think writing it from scratch with no data on how to do it will be faster?

 

Personally, looking at how to do something from someone else who did all the research would make more sense than starting from scratch with no data.


Edited by RXB, Mon Sep 3, 2018 3:21 PM.


#25 TheMole OFFLINE  

TheMole

    Dragonstomper

  • 806 posts
  • Location:Belgium

Posted Tue Sep 4, 2018 12:43 AM

Personally, looking at how to do something from someone else who did all the research would make more sense than starting from scratch with no data.


Well... yes, I agree... see below :)
 

I'm sure some stuff can be learned from those implementations





