insomnia

Linux-like OS for the TI


So I've mentioned on here a few times that I've been working on a new operating system for the TI. Development has gotten to the point where I think it's time to do some kind of announcement. Additionally, I've read the suggestion that something like Contiki should be ported one too many times. We can do better, folks.

 

I'm taking inspiration from Linux for several reasons. It's clearly a successful design. If it's similar enough, I might be able to port some code instead of writing everything from scratch. Writing user programs will be easier since there is no shortage of good references out there.

 

Keep in mind, I'm nowhere close to an alpha release, but I did want to get some input on whether what I'm working on might be of interest to someone besides me.

 

The design philosophy I've been following is that while TI wrote a bunch of neat code for the time, modern developers should only be limited by the constraints of hardware, not software decisions of the past. So that means backwards-compatibility is out. Developing against raw hardware is in.

 

I'm assuming this as a base platform:

Console

1MB AMS card

Two 96KB floppy drives

RS232 card

 

Here's a current feature list:

Kernel runs from a cart image

Device driver API

Drivers for:

text screen

keyboard

floppy drive

/dev/null

/dev/zero

/dev/random

POSIX-compatible filesystem API

CFS filesystem (a simple filesystem similar to ext2)

Preemptive multitasking

Priority scheduling

Mutexes

Semaphores

Shared memory

Syscall API

Timers
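As an aside on the CFS item above: the post describes it only as "a simple filesystem similar to ext2", so here is a purely speculative sketch of what such an inode might look like on a sector-oriented TI disk. Every field, the struct name, and the 256-byte block size are my assumptions, not the actual on-disk format.

```c
#include <stdint.h>

#define CFS_BLOCK 256u  /* assumed block size = one TI disk sector */

/* Hypothetical ext2-style inode; all fields are guesses for illustration. */
struct cfs_inode {
    uint16_t mode;       /* file type and permission bits */
    uint16_t links;      /* hard-link count */
    uint32_t size;       /* file size in bytes */
    uint32_t mtime;      /* modification time */
    uint16_t direct[4];  /* direct block numbers */
    uint16_t indirect;   /* single-indirect block number */
};

/* Largest file this layout could address: 4 direct blocks plus one
   indirect block full of 2-byte block numbers. */
uint32_t cfs_max_file(void)
{
    return (4u + CFS_BLOCK / 2u) * CFS_BLOCK;
}
```

With these assumed numbers, a file tops out around 33 KB, which is roughly in line with the 90 KB-class floppies the base platform assumes.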

 

Conspicuously missing features:

User shell

AMS-based process memory separation

Loading executables (EA5 and ELF)

Shell scripting

Some kind of catchy name

 

As of today, the kernel size is about 12KB and has about 10,000 lines of code (not including libc or libgcc). The code will eventually be put up on GitHub after I clean it up a bit. I need to impose a coding standard and make sure all dead code has been removed properly.

 

So, what does everyone think of this project? Is this something that sounds interesting or possibly useful? Is there something I seem to have overlooked? Deep-seated opposition to the very concept? Let me know.

 

I'm trying to work on documentation, but I suspect it will always lag the current code state. If there's some aspect of this thing someone would like more information on, I'll be happy to share.

 

Thanks for listening.


Great idea, and much more likely if a 4MB SAMS is created.

Combined with an F18A MK2 it would be at least similar to an IBM PC in some features.



Very cool. I like the idea of combining cart and SAMS. I've been toying around with the idea of doing something similar.

My idea is to go for the first revision of Minix, as it matches the hardware of that time. This is more of a personal project, as I'd like to learn more about OS concepts, and I don't have any code developed at this time.

 

 

Particularly interesting is how you would handle preemption.

For my Minix project I was thinking about a small sidecar board with software-controlled hardware timers that generate LOAD interrupts. As I said, these are only ideas floating around in my mind.

 

Good luck with the project! I am very interested in seeing how this evolves further.


I don't understand Linux but the ground-up approach sounds inspiring. If it takes off... I hope I'm aboard! I imagine this could help with cross-platform compatibility! :cool:


SUPER excited about this! I have been thinking of something like this myself for a long time as well, and I'd be happy to work on software for this environment once it's ready for that. Let me know if you think that'd help.


Fantastic project! It's about time the TI gets a proper OS. The hardware requirements could be problematic for those without a PEB, although Eric's Tiny PEB project might plug that hole nicely once completed.

That said, how will the end user program on that OS? Will it eventually include an assembler or be able to run something like a C or Pascal compiler? I assume that Basic and XB program launching will be out.

 

Truly terrific effort. I hope you are able to bring it to fruition.


Nice work.

 

The "obvious" catchy name is TINUX, but there are more creative people out there. :-)


I don't understand Linux but the ground-up approach sounds inspiring. If it takes off... I hope I'm aboard! I imagine this could help with cross-platform compatibility! :cool:

The original central Linux core was 64K, but the problem was that modules/pages/memory required a minimum 640K computer.

SAMS is 1MB, so this is a doable idea.

Linux was an extreme cut-down version of UNIX (which would only run on mainframes and minicomputers with 200MB of memory), and thus Linux would fit on an IBM PC.

Linux became a rage as it put an OS other than DOS/Windows/OS8 onto a desktop computer.


I guess I was one of those who mentioned Contiki recently. Didn't mean to annoy. If this is POSIX-compatible, that should make porting existing applications smoother.


First off, I should answer some of these questions.

@retroclouds RE: scheduler preemption

I'm using the VDP interrupt to periodically check the process queue to see if there is a higher-priority process that is ready to run. If so, the currently running process is packed up and the new one is started. Additionally, each time a syscall is invoked (basically any IPC or file I/O operation), the system pauses the calling process and starts the next ready, highest-priority process. In some pathological cases, it is possible for a process to starve same- or lower-priority processes, but that's a risk with other operating systems as well. In all cases, higher-priority processes are started when expected.
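The tick-time check described above can be sketched like this; the process table layout, state names, and "larger number wins" priority convention are all my assumptions, not the kernel's actual code.

```c
#include <stddef.h>

enum pstate { P_READY, P_RUNNING, P_BLOCKED };

struct proc {
    int pid;
    int priority;      /* assumed convention: larger value = more urgent */
    enum pstate state;
};

/* Run from the periodic (VDP) interrupt: scan the process table and return
   the process that should be running now. If the result is not `current`,
   the caller packs up `current` and switches to the returned process. */
struct proc *pick_next(struct proc *table, size_t n, struct proc *current)
{
    struct proc *best = current;
    for (size_t i = 0; i < n; i++)
        if (table[i].state == P_READY && table[i].priority > best->priority)
            best = &table[i];
    return best;
}
```

The same routine would run on every syscall, which is why a blocked high-priority process resumes as soon as it becomes ready.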

 

@TheMole RE: additional developers

Once I get the major system design decisions made and get more of a framework put together it would definitely be helpful to bring on additional help. I'm typically used to being a solo developer, but in the interest of time and feature coverage, it's probably a good idea to get additional hands on the project at some point.

 

@TeBF RE: Tinix

I thought about the same name, but unfortunately someone already has a minix-derived system by that name. Oh well.

 

@gemintronic RE: contiki

Don't worry about it. I looked into porting Contiki a while ago, and it just wasn't a good fit for the TI. A LOT of compromises were taken to make that OS work. It's a microkernel with stackless processes. The whole project has the feel of a demo program. Impressive, but highly tuned for a single purpose. We have enough resources to make something with a more familiar feel and a lot more capability.

 

@Vorticon RE: user programming

Good question. For now I expect cross development on a PC to be the only way to go. I would eventually like to get self-hosted development working (C, assembly, pascal, basic, etc), but that's a ways off.

 

So last night I looked at the AMS card, and started thinking about how best to use that resource. Specifically, how do we maximize memory for user processes? Even though we have up to 1MB of RAM available, we don't have a Memory Management unit (MMU) or facilities in the processor to detect reads or writes to unmapped space. We would need to add wrapper code to every read or write operation to guarantee that the intended memory is available for use before the operation takes place. This looked even worse as I tried to figure out a way to implement that.
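For illustration, the rejected per-access wrapper would look something like this: every store becomes a function call that first verifies the target page is mapped (all names here are invented). The cost is obvious once you compare it to the single MOV instruction it replaces.

```c
#include <stdint.h>

#define PAGE_SHIFT 12               /* assumed 4 KB pages */
#define NPAGES     8                /* toy address space for the sketch */

static uint8_t mem[NPAGES << PAGE_SHIFT];
static int mapped[NPAGES];
static int map_faults;              /* how often the slow path was taken */

static void ensure_mapped(uint32_t page)
{
    if (!mapped[page]) {            /* slow path: program the mapper */
        mapped[page] = 1;
        map_faults++;
    }
}

/* Every write in user code would have to go through something like this
   instead of a plain store, which is the overhead being ruled out. */
void checked_store(uint32_t addr, uint8_t v)
{
    ensure_mapped(addr >> PAGE_SHIFT);
    mem[addr] = v;
}
```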

 

I looked at the source for uClinux, a variant of Linux which supports processors without MMUs. They resolve the problem by simply not supporting virtual memory or schemes like the one I was originally considering. This wasn't too surprising, since I wasn't far from making the same decision.

 

So here's the memory model for user processes:

 

0x2000 - 0x3fff : Application code, 8KB paged

0xa000 - 0xfdff : Application memory, 23.5 KB

0xfe00 - 0xffff : Process stack, 512 bytes

 

If more memory is needed for data, we can use a floppy or a temp file residing in AMS space.

 

This should be OK. I've already got infrastructure set up for handling paged kernel code, and this should be extendable to user code. In my experience, there's usually about an 80/20 split between code and data. So for 24 KB of data, we would typically have a 120 KB executable file, and with a total of 1 MB of RAM available, we can simultaneously run about 8 of these executables. Alternatively, we should be able to run up to 128 lightweight processes (4 KB code, 4 KB data). Not too shabby.
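The capacity arithmetic above checks out; here it is as a trivial host-side helper (the 96 KB code figure follows from applying the 80/20 split to 24 KB of data, and the function name is mine):

```c
/* How many processes of a given footprint fit in the AMS card. */
int capacity(int ams_kb, int code_kb, int data_kb)
{
    return ams_kb / (code_kb + data_kb);
}
```

capacity(1024, 96, 24) gives the ~8 large processes mentioned above, and capacity(1024, 4, 4) gives the 128 lightweight ones.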

 

So the next step for me is to get AMS mapping working, and to find a good way to do block management to allow kernel and applications to share memory resources. Temp file management will probably come after that. Finally, I think application loaders will be good to work on. At that point, an alpha release doesn't seem so far away.

 



Hmm, the RAG AMS Assembler should do the trick for you; it already has the memory manager done in assembly.


@TeBF RE: Tinix

I thought about the same name, but unfortunately someone already has a minix-derived system by that name. Oh well.

 

99ix?

 

Or steal naming inspiration from a DIFFERENT child of Unix and call it Plan99. Perhaps a little misleading, but I'm sure people will figure it out.


After reviewing your initial post I imagined a year or more until a rollout, if at all... now I feel... I should not underestimate your resolve! Though I can't even attempt to wrap my mind around the specifics of LINUX right now... I believe that I recognize your brainstorming through the "foundationals" technique. Sounds to me like you are quite familiar with this subject matter. Looking forward with high expectations. ;-)


0xfe00 - 0xffff : Process stack, 512 bytes

 

I suggest skipping addresses FFFA-FFFF (9995 decrementer, LOAD/NMI vector). But these are details.


So last night I looked at the AMS card, and started thinking about how best to use that resource. Specifically, how do we maximize memory for user processes? Even though we have up to 1MB of RAM available, we don't have a Memory Management unit (MMU) or facilities in the processor to detect reads or writes to unmapped space. We would need to add wrapper code to every read or write operation to guarantee that the intended memory is available for use before the operation takes place. This looked even worse as I tried to figure out a way to implement that.

 

Since all processes have to behave nicely anyway (e.g. they must enable interrupts from time to time and they cannot trash the ISR), why can't they also be required to behave nicely when it comes to memory use? I don't think much new will come out of this without being able to allocate more than 32K for each process.


 

Since all processes have to behave nicely anyway (e.g. they must enable interrupts from time to time and they cannot trash the ISR), why can't they also be required to behave nicely when it comes to memory use? I don't think much new will come out of this without being able to allocate more than 32K for each process.

 

Not sure that's what Insomnia's implying. I think the point is that having wrapper code around every read or write operation automatically provided/enforced by the OS, compiler and libc would create a performance penalty beyond what would be acceptable. The compromise that is being proposed, I think, is that data (heap and bss, in C terminology) is always in the same place, in the same page for a given process (although multiple processes can have data in multiple pages); however, code can span as many 8k segments as one wants. So, in principle each process can allocate as much memory as it likes... the limitation is only for variables/data.


Having said that, I do think it would be nice for the OS to provide facilities for paging of data segments as well. It would, as Rasmus points out, be up to the programmer to use that feature carefully if he/she decides to go that route. You could just provide syscalls that request additional pages in the 0xa000-0xc000 area so the OS knows to stay away from these for other processes and have the programmer do the rest of the paging manually/explicitly.

 

It would probably also be nice if paging of code segments could be automated for C programs?
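The page-request syscall suggested above could be modeled along these lines. The pool, the names, and the 4 KB granularity are all invented for the sketch; a real implementation would program the SAMS mapper for the 0xa000 window rather than hand back host pointers.

```c
#include <string.h>

#define POOL_PAGES 16               /* toy stand-in for free AMS pages */
#define PAGE_SIZE  4096

static unsigned char pool[POOL_PAGES][PAGE_SIZE];
static int in_use[POOL_PAGES];

/* sys_page_alloc: reserve a page for the calling process, or -1 if none.
   The OS now knows to keep other processes away from it. */
int sys_page_alloc(void)
{
    for (int i = 0; i < POOL_PAGES; i++)
        if (!in_use[i]) {
            in_use[i] = 1;
            memset(pool[i], 0, PAGE_SIZE);
            return i;
        }
    return -1;
}

/* sys_page_free: return a page to the pool. */
void sys_page_free(int id)
{
    in_use[id] = 0;
}

/* Stand-in for mapping the page into the data window; the program does the
   rest of the paging manually/explicitly, as suggested above. */
unsigned char *sys_page_map(int id)
{
    return pool[id];
}
```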


Having said that, I do think it would be nice for the OS to provide facilities for paging of data segments as well. It would, as Rasmus points out, be up to the programmer to use that feature carefully if he/she decides to go that route. You could just provide syscalls that request additional pages in the 0xa000-0xc000 area so the OS knows to stay away from these for other processes and have the programmer do the rest of the paging manually/explicitly.

 

It would probably also be nice if paging of code segments could be automated for C programs?

RAG AMS (SAMS) Assembler has that built in and has a C compiler too...so most of that work is done already.


 

Not sure that's what Insomnia's implying. I think the point is that having wrapper code around every read or write operation automatically provided/enforced by the OS, compiler and libc would create a performance penalty beyond what would be acceptable. The compromise that is being proposed, I think, is that data (heap and bss, in C terminology) is always in the same place, in the same page for a given process (although multiple processes can have data in multiple pages); however, code can span as many 8k segments as one wants. So, in principle each process can allocate as much memory as it likes... the limitation is only for variables/data.

 

I'm not sure what I'm suggesting or how it would work. It just appears to me that if all software is written from scratch in C using common library routines to allocate and deallocate memory, and if all software is expected to follow certain rules for good behavior, there might be less need to check all reads and writes at runtime. An 80/20 code/data ratio seems to depend a lot on the use case; for a new text editor, for instance, it would probably be more like 20/80. It would be nice to be able, from C, to allocate more memory for data, not as raw pages, but as something like arrays with far pointers.
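The "arrays with far pointers" idea above can be modeled like this: a far pointer is a (page, offset) pair, and each access maps the page before touching the data window. The layout and helper names are mine, purely to show the shape.

```c
#include <stdint.h>

#define FPAGE_SIZE 4096u
#define FPAGES     8u

static uint8_t ams[FPAGES][FPAGE_SIZE];  /* stand-in for AMS storage */

struct farptr {
    uint8_t  page;
    uint16_t off;
};

/* Turn a linear index into a large "far array" into (page, offset). */
struct farptr far_index(uint32_t i)
{
    struct farptr p = { (uint8_t)(i / FPAGE_SIZE),
                        (uint16_t)(i % FPAGE_SIZE) };
    return p;
}

/* On real hardware these would first write p.page to the SAMS mapper and
   then access the fixed window; here the mapping step is implicit. */
uint8_t far_read(struct farptr p)             { return ams[p.page][p.off]; }
void    far_write(struct farptr p, uint8_t v) { ams[p.page][p.off] = v; }
```

A compiler or library could hide the pair behind array-like accessors, at the cost of a mapper write on every access that crosses a page.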


Hi,

 

This is a really great project!

 

Have you heard about FUZIX, the Unix-like project for 8-bit computers?

http://www.fuzix.org/

 

It's a project from Alan Cox, one of the Linux "grandmasters", and it's already running on some pretty old computers like the TRS-80 Model 1, CoCo 2, CoCo 3, or Amstrad NC200.

There's even a page on their GitHub wiki suggesting the Geneve would be a perfect candidate for FUZIX:

https://github.com/EtchedPixels/FUZIX/wiki/Platform-Review-Notes

One point I like about this project is that it's built from the same code for every platform (where possible), which means that applications could be recompiled for each architecture.

As it's intended from the beginning to run on very limited computers, it would probably be a more interesting source than Minix or Linux.


RAG AMS (SAMS) Assembler has that built in and has a C compiler too...so most of that work is done already.

I'm sure some stuff can be learned from those implementations, but the point is to make a POSIX-like environment built on the gcc toolchain, so swapping out assemblers and compilers is entirely out. Also, the initial development environment will be cross-platform, which makes the use of the RAG tools impossible, I'd say.

 

I'd never heard of Fuzix before, that's really interesting. It seems that they've considered (and support) a wide array of memory management options: https://github.com/EtchedPixels/FUZIX/wiki/Memory-Management. Should be a good source of inspiration!


I'm sure some stuff can be learned from those implementations, but the point is to make a POSIX-like environment built on the gcc toolchain, so swapping out assemblers and compilers is entirely out. Also, the initial development environment will be cross-platform, which makes the use of the RAG tools impossible, I'd say.

 

I'd never heard of Fuzix before, that's really interesting. It seems that they've considered (and support) a wide array of memory management options: https://github.com/EtchedPixels/FUZIX/wiki/Memory-Management. Should be a good source of inspiration!

Ok, how much data on SAMS do they provide for C memory management?

The RAG C AMS (SAMS) compiler shows the C code for memory management using the SAMS, just about exactly what is needed.

 

Unless you think writing it from scratch with no data on how to do it will be faster?

 

Personally, looking at how to do something from someone else who did all the research makes more sense to me than starting from scratch with no data.


Personally, looking at how to do something from someone else who did all the research makes more sense to me than starting from scratch with no data.

Well... yes, I agree... see below :)

 

I'm sure some stuff can be learned from those implementations

