UCSD Pascal attempted to create a common development/target environment.
The biggest problem with UCSD Pascal is that it used a virtual machine.
That in itself isn't a horrible idea, but the virtual CPU is stack-based, most processors of the day had no stack-relative instructions to implement it efficiently, and the compiler wasn't a modern optimizing design.
I think Apple Pascal runs about 30% faster than Applesoft II BASIC, which is about what you get from Applesoft BASIC compilers.
Apple added support for native code segments to gain speed and to allow direct hardware access, but that isn't portable, and a better design from the start would have made it faster without having to write native code.
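To make that concrete, here is roughly what a trivial assignment turns into (from memory, so take the exact mnemonics and offsets as an approximation, not as real compiler output):

    i := j + 1;

compiles to a p-code sequence along the lines of

    SLDL 2     load local word 2 (j) onto the evaluation stack
    SLDC 1     push the constant 1
    ADI        add the two top-of-stack integers
    STL  1     pop the result into local word 1 (i)

Every one of those p-codes has to be fetched and decoded by the interpreter, and every push and pop goes through memory, where a native compiler with even simple register allocation could do the whole statement in a couple of instructions.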
The virtual machine is the biggest advantage of UCSD p-system, not the biggest problem.
The p-system is all about portability; that's the whole purpose of the system. And since it mainly implements Pascal, and is itself written in Pascal, a stack machine is the obvious choice. The whole point of a language like Pascal is its nested variable scopes and its support for recursion, and that maps naturally onto a stack-based machine. A memory-to-memory architecture like the TMS 9900 can also operate on values on the stack directly, as well as address anything at a given offset from the top of the stack.
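Just as an illustration (plain Pascal, nothing p-system specific), a recursive function like the one below needs a fresh copy of its locals and a return link for every call, which is exactly the activation record a stack machine gives you for free:

    program facdemo;

    { each recursive call gets its own n and its own return link on the stack }
    function fact(n: integer): integer;
    begin
      if n <= 1 then
        fact := 1
      else
        fact := n * fact(n - 1)
    end;

    begin
      writeln(fact(5))   { prints 120 }
    end.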
The native code converter is as portable as anything else. You write your program in Pascal, then do the native conversion on the target machine, if you feel you need to. You don't have to, but if you then want to run the program on a different target computer, you simply move the compiled code file to that machine (remember, you don't have to re-compile) and apply the native code conversion on the new target, with the appropriate NCG software. NCGs were available for many processors, but unfortunately TI never released one for the TI 99/4A.
Yes, of course an emulated machine like the PME is slower than native code. Even when the architecture stays the same, I've shown you that at the extreme about eight instructions are executed to do the one that's really needed. But more complex things, like procedure calls, have far less relative overhead, since they do much more work per p-code. P-codes like CXG aren't converted by the NCG either.
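If you want to see where those eight or so instructions go, here is a minimal sketch of such an interpreter loop, written in Pascal. It is not the real PME, and the opcode numbers are made up for the sketch, but the shape is the same: fetch, advance the instruction pointer, dispatch, and only then perform the one add the program actually asked for:

    program pmesketch;
    const
      oADI = 130;   { add integers - opcode number made up for this sketch }
      oHLT = 0;     { made-up halt opcode }
    var
      code: array[0..15] of integer;    { the p-code stream }
      stack: array[0..15] of integer;   { the evaluation stack }
      ipc, sp, op: integer;
      running: boolean;
    begin
      { a two-p-code "program": add the two values already on the stack, then halt }
      code[0] := oADI;
      code[1] := oHLT;
      stack[0] := 2;
      stack[1] := 3;
      sp := 2;
      ipc := 0;
      running := true;
      while running do
      begin
        op := code[ipc];     { fetch the next p-code }
        ipc := ipc + 1;      { advance the instruction pointer }
        case op of           { decode and dispatch }
          oADI:
            begin
              sp := sp - 1;
              stack[sp - 1] := stack[sp - 1] + stack[sp]   { the one useful add }
            end;
          oHLT:
            running := false
        end
        { and back to the top of the loop for the next p-code }
      end;
      writeln(stack[0])   { prints 5 }
    end.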
But the TI 99/4A p-system does support native code; it just lacks the program that does the automatic conversion. Once converted, a program works. I've tried it: out of curiosity I converted a small routine by hand and patched the code file to look like what an NCG would produce, and it runs just as it should. I've toyed with the idea of writing an NCG for the TI, but so far I've written my own assembly routines instead, when needed.
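For completeness, this is roughly how I hook hand-written assembly in today instead of waiting for an NCG: declare the routine EXTERNAL on the Pascal side, assemble the body as a .PROC with the system's assembler, and bind the two together with the Linker. The name and parameters below are made up for the example; the actual body lives in the assembled .PROC, not in the Pascal source:

    program demo;

    { body supplied by a separately assembled .PROC, bound in by the Linker }
    procedure fastfill(ch: char; count: integer); external;

    begin
      fastfill(' ', 768)   { e.g. blank a 768-character screen area }
    end.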
To understand the decision to go with a virtual machine, the PME, instead of a native compiler, and why that trade-off between portability and speed was the right one, you have to consider exactly that: the time when the decision was made.
At that time, there was a plethora of different computer systems. Many of them had similar capabilities, but they were still incompatible with each other. It seemed obvious that this state of affairs would continue for decades, since changing it would require some manufacturer to come up with a computer architecture good enough that everybody would embrace it, and fully open and documented, so that anyone could make hardware and software for it. The only corporation that could perhaps pull off such a stunt was Big Blue, IBM. And if there were a few things you could take for granted, it was that:
- IBM would never introduce a computer based on a microprocessor.
- IBM would never tell you all the details about the innards of such a computer.
- IBM would never hire a failed university student, still wet behind the ears, to write the operating system for such a computer.
Hence, a snowball in hell would have had better odds of survival than the idea of a "universal computer architecture" that everyone could have, in the office and at home, for which programs could be developed to be as efficient as possible, without having to worry about whether they would run on some other computer system.
I concur with the voices above that say a CP/M based on the TMS 9900 is a nice exercise, but rather pointless today. There are quite a lot of programs in the old USUS library that could run on the 99/4A, if that's what you want to accomplish. The TI 99/4A is hardly used for productive work today anyway, so this is for fun. Why not take on the task of actually writing the NCG for the p-system on the 99/4A instead? That would speed it up, perhaps by a factor of five, which is quite a difference.