
Has 'C' replaced assembler as the programmer's preferred language of choice?


63 replies to this topic

#1 carmel_andrews OFFLINE  

carmel_andrews

    Quadrunner

  • 13,297 posts
  • Location:from somewhere, anywhere and no where

Posted Wed Aug 8, 2012 7:03 AM

Seeing as these days probably 95 to 100 percent of every game or non-game program you see on the software shelves, or that you download, is written in C (or C++), has the end come for traditional heavyweight coding languages like assembler or ML/MC?

I guess C (or C++) will still be the hot language in, say, 100 years' time, since its syntax shaped so many later languages, i.e. Java, PHP and the like, as well as Python. Compared to assembler (or ML/MC), C/C++ also has the advantage of being largely portable: you can write a program in C on, say, an A8 and have it run (with minor changes) on an Amiga, PC, Mac or a modern gaming system, whereas with assembler you would first need to learn the equivalent instructions on the machine/system you wanted to port the program to.

I am guessing that the likes of C/C++ started overtaking assembler/ML/MC as the games industry's preferred programming language at about the same time that games development/publishing had its seismic shift away from computers to console and mobile gaming systems, which if I recall rightly started in the mid-90s (i.e. about the time the Saturn, N64 and PlayStation were launched). Most games development prior to those systems was largely done in assembler; even if the odd Jaguar, SNES/MD or Lynx game was coded in C, the majority were still coded in assembler.

The question is, though: is C/C++ a better development tool than the likes of assembler? Is it easier to learn? With better support for C/C++ both online and offline, it's a lot easier and quicker to get programming in it, since there's far more material available than for assembler. I think assembler (or ML/MC) will be hard pushed to regain its former glory and become again the pre-eminent programming language of choice it once was.

Edited by carmel_andrews, Wed Aug 8, 2012 7:14 AM.


#2 Bryan OFFLINE  

Bryan

    Quadrunner

  • 8,282 posts
  • Cruise Elroy = 4DB7
  • Location:Puriscal, Costa Rica

Posted Wed Aug 8, 2012 7:06 AM

Why is this in the A8 forum?

#3 Rybags ONLINE  

Rybags

    Quadrunner

  • 12,189 posts
  • Location:Australia

Posted Wed Aug 8, 2012 7:08 AM

Assembler started to fall out of favour early in the 16-bit era.

Still, most serious game dev relied on Assembler, at least in part, and most arcade-type games on the likes of Amiga/ST were in Assembler.

These days, Assembler barely gets a look in. For a game you might find several critical tight routines coded in Asm with the remainder in C or even another high-level language like Delphi.

At the system level, Assembler might be used in some critical OS and driver portions but probably makes up a miniscule percentage of the overall codebase.

ed - yeah, why in the A8 forum. Things haven't changed much on our old gear. Forget about doing much of significance with a high-level language, most action gaming is still banged out in 6502 Asm.

Edited by Rybags, Wed Aug 8, 2012 7:09 AM.


#4 theloon OFFLINE  

theloon

    Quadrunner

  • 6,940 posts

Posted Wed Aug 8, 2012 7:35 AM

Most people who grew up with these machines are used to programming directly on them. People coming into the homebrew scene started on Intel PCs with Visual C++ and Eclipse. C development with cross compilers should be the way to go nowadays, but it isn't used as much as it could be. Hopefully cc65 will continue to mature.

#5 carmel_andrews OFFLINE  

carmel_andrews

    Quadrunner

  • Topic Starter
  • 13,297 posts
  • Location:from somewhere, anywhere and no where

Posted Wed Aug 8, 2012 8:08 AM

Why is this in the A8 forum?

Well, considering that most of the programmers within the Atari 8-bit section are mostly assembler programmers, and that some of them and other programmers who frequent the A8 section also program in C or C++, this was largely aimed at them.

Going by your questioning of my post in the first place, I'm guessing you had a problem with it, so I suggest you contact the people behind this site and request that they move the thread to the appropriate section.

#6 sack-c0s OFFLINE  

sack-c0s

    Stargunner

  • 1,097 posts
  • Location:Kingston Upon Thames, UK

Posted Wed Aug 8, 2012 8:37 AM

The thing with C is that it's such a bare-bones language it's damn near a portable assembly language.

I use it at work, but everything we code has to run on ARM and x86 these days and writing everything twice isn't an option. Being graphics driver code, the optimised assembly is the stuff the GPU itself runs, so most of what the code actually does is kick off a bit of hand-written or cross-compiled assembler on another chip, and therefore isn't the code we really need to worry about optimising anyway. The time not spent hand-coding x86/ARM to prod the hardware registers on a once-per-draw operation is time better spent tightening up microcode that runs on a per-pixel basis.

That and the usual caveats - optimising compilers are almost unbeatable these days, choosing the right algorithm in the first place is 90% of the battle, etc.

#7 danwinslow OFFLINE  

danwinslow

    Stargunner

  • 1,910 posts

Posted Wed Aug 8, 2012 9:23 AM

^ this
For the most part, C is really just a very advanced macro assembler. If you code carefully and do not use any of the C 'library' (i.e. stdlib, stdio, etc.), what you get is basically close to straight assembly. It's *never* going to be as efficient as hand-coded assembly, but it can get pretty damn close.

Edited by danwinslow, Wed Aug 8, 2012 9:24 AM.


#8 Bryan OFFLINE  

Bryan

    Quadrunner

  • 8,282 posts
  • Cruise Elroy = 4DB7
  • Location:Puriscal, Costa Rica

Posted Wed Aug 8, 2012 11:23 AM


Why is this in the A8 forum?

Well, considering that most of the programmers within the Atari 8-bit section are mostly assembler programmers, and that some of them and other programmers who frequent the A8 section also program in C or C++, this was largely aimed at them.

Going by your questioning of my post in the first place, I'm guessing you had a problem with it, so I suggest you contact the people behind this site and request that they move the thread to the appropriate section.

It was (originally) worded like a generic computing question. That's all.

#9 flashjazzcat OFFLINE  

flashjazzcat

    Quadrunner

  • 7,773 posts
  • Location:United Kingdom

Posted Wed Aug 8, 2012 11:57 AM

If not pertaining directly to the A8 (in which case it is in the wrong forum), then the original question has to be from the past, surely...? And if referring solely to the A8: yeah, C cross-dev tools are good now, but since 6502 assembler is (comparatively) easy to grasp, as compact as you can get, and faster than even the most highly optimised C code, C can't be seen as a "better" development option than assembler. It depends what you mean by better, of course. Yes, C is easier to learn, offers a faster development cycle, and you can transfer your coding skills to just about any modern platform.

Nice thing about C on the A8, I suppose (assuming you know assembler), is that you can compile something in C and then hand-tweak the compiler output before assembling and linking it. Nothing beats pure assembler on the A8 for me, though, especially now that source written for MADS is starting to look more and more like C (or some intermediate-level variant thereof). And as far as getting bigger projects off the ground in asm: I've said it before, but once you have a decent library of common functions (open file, print to screen, etc), it's hardly more of an undertaking than writing something in BASIC.

#10 snicklin OFFLINE  

snicklin

    Stargunner

  • 1,095 posts
  • Location:UK

Posted Wed Aug 8, 2012 12:14 PM

I used to be a professional 'C' programmer, programming for HP around 10 years ago. So obviously you can tell that I have a love for 'C'. It is a wonderful language, it is quick and as you learn more and more of it, it is very sexy and I do not say that lightly.

If you ask me, if you're programming a game for a PC, it (or perhaps C++) is/are the language(s) to use for development.

A couple of years back I embarked on a programming project in 'C' on the Atari using CC65. However, in the end I found that the code was too slow for the Atari, even when optimised as much as I could manage.

So now I'm working in 6502 Assembly and to be quite honest, it's a whole lot easier to learn than C and the concept of a pointer is so much easier to use in Assembly. And the code is sooo much quicker, especially due to some of the kind assistance that I have received on here.

#11 snicklin OFFLINE  

snicklin

    Stargunner

  • 1,095 posts
  • Location:UK

Posted Wed Aug 8, 2012 12:17 PM

once you have a decent library of common functions (open file, print to screen, etc), it's hardly more of an undertaking than writing something in BASIC.


Absolutely. I'm working on a common functions library at the moment as I feel that it will all pay off later. I want to avoid reprogramming everything each and every time.

#12 JamesD OFFLINE  

JamesD

    River Patroller

  • 4,500 posts

Posted Wed Aug 8, 2012 12:29 PM

'C' will never completely replace assembly, but it certainly has done a lot to limit the role of assembly to drivers and speed critical sections of code.

This is a bit beyond the original topic, but compilers in general probably would have replaced assembly much sooner if compiler technology had advanced earlier. I don't think people saw a need for well optimized code until the rise of personal computers and workstations, and the first PCs had such limited memory that assembly was mandatory.

If you look at the history of 'C', it was still in development in the early '70s. Kernighan & Ritchie's book 'The C Programming Language' didn't even come out until '78, and I don't think the language was used much outside of Unix before that.
People didn't start to adopt 'C' on 8 bits until Small-C was published in 1980 and it still took a back seat to Pascal since Pascal had been around since 1970 and Tiny Pascal had already been published in 1979. Pascal continued to be favored over 'C' until code optimization improved and universities started teaching the language. When I learned 'C' it was actually as part of a Unix class in '86... it wasn't even a separate class yet.

FWIW, Z80 extensions were added to Small-C shortly after its release, and a 6809 version was published for the Flex OS by '82 or '83.
I don't think Small-C was ported to the 6502 until the late '80s, the earliest date I found was '89 but I think a couple commercial 'C' compilers based on it were released in the late '80s before the public code release.

Really, all 8 bit compilers generated horrible code in those days. I remember using a commercial compiler for the Z80/HD64180 in the early '90s and the company I worked for had me write a peephole optimizer for it that reduced the code size by about 30%... and that was the better of the two compilers we had.

Code optimization was a little more advanced on 16 bit machines. I remember Lattice 'C' for the 680x0 had peephole and global optimizers in the late 80s. But clock speeds were still so slow that there was an obvious difference in speed between 'C' and assembly. Any time critical code for the Amiga still had to be in assembly. Still, it wasn't until GCC came out that 680x0 code generation was really good... and it's still not as fast as assembly.

Ultimately, the growth in the size of programs has necessitated the use of compiled languages instead of assembler.
Compilers had already been in wide use on mainframes but on 8 bits, you had limited memory and poor compiler output... which made compilers unattractive.
But on 16/32 bit machines, the memory limit wasn't such an issue and faster CPUs have made hand optimization less important.
Over time, the improvement in compilers has even made the use of compilers more attractive even on 8 bit machines.
In the case of '90s videogames, clock speeds were still too low to abandon assembly. Let's face it, videogames have to push the limits of the hardware and every clock cycle counts.

I think assembly is still a valuable skill and it is definitely a necessity for certain programs, but it certainly isn't as desirable as it was in the past.
High level languages are easier to learn than assembly and object oriented programming in particular certainly has a lot of advantages.
At the very least, assembly requires more discipline to write good code than higher level languages do. Assembly is also harder for a team to maintain, largely due to the increase in code size and due to the fact that it's more difficult to understand.

Ultimately, it depends on what machine you are writing code for and what the application is. I've had to write embedded code for a 4-bit processor that ran at a few kilohertz with only a few bytes of RAM. A compiler wasn't even an option, and there will probably always be applications like that. But for a multi-GHz/gigabyte machine... not so much, and when assembly is required these days, it's probably only 1 or 2 developers on a 20-person team. Indie developers writing 8-bit games would of course be an exception to that.

#13 ivop OFFLINE  

ivop

    Chopper Commander

  • 211 posts
  • Location:The Netherlands

Posted Wed Aug 8, 2012 2:45 PM

Do not forget that the amount of data to be processed has also increased enormously. A single cycle saved per pixel in assembly can mean the difference between being able to play an H.264-encoded video on your hardware or not. Look at FFmpeg (a cross-platform multimedia project); there's still a lot of assembly code in there for the speed-critical stuff.
Also, for a long while, most compilers did not take any advantage of SIMD instructions.

As for 8-bit c-compilers, IMHO the biggest mistake is to switch to a "simulated" stack. Just use the CPU's stack for return addresses, make function variables static (i.e. fixed memory addresses, not on the stack) and forget about recursion. Most of the time, you don't need it, and if you do, program around the compiler's limitation. The resulting code will be way faster than sdcc, cc65, etc...

Edited by ivop, Wed Aug 8, 2012 2:45 PM.


#14 JamesD OFFLINE  

JamesD

    River Patroller

  • 4,500 posts

Posted Wed Aug 8, 2012 3:40 PM

As for 8-bit c-compilers, IMHO the biggest mistake is to switch to a "simulated" stack. Just use the CPU's stack for return addresses, make function variables static (i.e. fixed memory addresses, not on the stack) and forget about recursion. Most of the time, you don't need it, and if you do, program around the compiler's limitation. The resulting code will be way faster than sdcc, cc65, etc...

I think people were more worried about overflowing the 256 BYTE 6502 stack than speed. It's the only CPU where compilers even need to use that approach but it would be nice to at least have a choice of how the stack is handled.

SDCC doesn't support the 6502 btw.

#15 flashjazzcat OFFLINE  

flashjazzcat

    Quadrunner

  • 7,773 posts
  • Location:United Kingdom

Posted Wed Aug 8, 2012 6:36 PM

I know it has a software parameter stack, but surely CC65 uses the hardware stack for return addresses anyway?

#16 JamesD OFFLINE  

JamesD

    River Patroller

  • 4,500 posts

Posted Wed Aug 8, 2012 7:31 PM

I know it has a software parameter stack, but surely CC65 uses the hardware stack for return addresses anyway?

I've never looked at its output to check, so someone else will have to answer that one.
I don't know how much stack space the Atari normally has in use but I'd think you should have over 100 levels of calls to play with.
You'd have to use recursion or almost intentionally try to overflow the stack with a program that fits in an 8 bit's address space.

#17 kenjennings OFFLINE  

kenjennings

    Moonsweeper

  • 411 posts
  • Me + sio2pc-usb + 70 old floppies
  • Location:Florida, USA

Posted Wed Aug 8, 2012 7:48 PM

6502 has only a 256 byte hardware stack right?

Figure an average of three arguments passed by value (1 or 2 bytes each) or by address (2 bytes each) and then a return address.
That's in the general neighborhood of 30+ levels... still likely difficult to reach in real-world situations (without running Microsoft Windows ;-).

#18 JamesD OFFLINE  

JamesD

    River Patroller

  • 4,500 posts

Posted Wed Aug 8, 2012 9:30 PM

6502 has only a 256 byte hardware stack right?

Figure an average of three arguments passed by value (1 or 2 bytes each) or by address (2 bytes each) and then a return address.
That's in the general neighborhood of 30+ levels... still likely difficult to reach in real-world situations (without running Microsoft Windows ;-).

Local variables would also be stored on the stack, not just parameters, and you are assuming you can use the entire stack.
You'd have to write some very different code to get around allocating variables on the stack but then some theatrics are often required to conserve memory on 8 bit systems anyway.

There is also an issue I had forgotten. The 6502 does not support stack relative addressing so the code is going to be accessing variables on the stack in the same manner either way. You gain the ability to push parameters on the stack but then you have to copy the stack pointer to page zero and index off of that to access them. You also have to adjust the stack pointer to reserve room for local variables and on a return. I'd have to look at the code required but I'm not sure you'd gain much if any speed.

Now, if you have a 65802 or 65816, you have up to 64K of stack space (minus space for your code, OS, etc...) and stack relative addressing so all this ugliness goes away.

#19 Chris Crawford OFFLINE  

Chris Crawford

    Space Invader

  • 25 posts

Posted Wed Aug 8, 2012 9:34 PM

When programming for the 6502, the ability to make use of zero page is of enormous importance, and few C compilers can handle those allocations efficiently. I see no reason why anybody programming on a 6502 machine would not use assembler.

#20 phaeron OFFLINE  

phaeron

    Stargunner

  • 1,110 posts
  • Location:USA

Posted Wed Aug 8, 2012 11:00 PM

C emphasizes pointers and the stack... which are two big weaknesses of the 6502. The 6502 also relies a lot on its addressing modes which the existing C compilers aren't good at taking advantage of. In theory, a modern compiler with a global optimizer and link-time code generation could do a much better job by marking non-recursive paths, pushing locals/arguments to global variables, and identifying when indexing address modes can be used directly. Anyone working on a 6502 target for LLVM? :)

GCC did do a better job than Lattice C on the 68000, but I wouldn't call it great. I used it when working on a Dragonball (68K core), and at the time GCC still didn't know common tricks like push word + pop byte for shifting by 8 bits. IMO, speed wasn't the big problem with C on the 6502, though -- it was the much lower tolerance for the increased code size and the inability to realistically host a C compiler. Apparently Aztec C did run natively on the Apple II, but I imagine it probably ran on a geologic time scale and did no optimization.

#21 Shawn Jefferson OFFLINE  

Shawn Jefferson

    Stargunner

  • 1,700 posts
  • Location:Victoria, Canada

Posted Thu Aug 16, 2012 11:36 PM

cc65 (www.cc65.org), has the concept of register variables (6 bytes of them), so you can use those to help speed up your functions. It also has options for:

fastcall (parameter passing/returning in registers; assembly code only! Using this for C functions does something else, which saves some space, not speed)
static locals

Another optimizing option is to define variables as zeropage, or BSS/DATA globals which reduces the overhead of stack manipulations.

Personally, I think cc65 is a great way to whip up some code on the A8. You then can hand-optimize in assembly, or you can optimize the C code quite a bit as well using some simple techniques.

PS. cc65 uses the hardware stack for return addresses, via the normal jsr/rts assembly instructions. There's also a special C stack that can be anywhere in memory and any size you think will work with your program (with a custom linker configuration file, of course).

Edited by Shawn Jefferson, Thu Aug 16, 2012 11:36 PM.


#22 potatohead OFFLINE  

potatohead

    River Patroller

  • 4,161 posts
  • Location:Portland, Oregon

Posted Fri Aug 17, 2012 1:54 AM

I found this thread kind of interesting and went off looking. Found this:

http://www.dwheeler.com/6502/

Interesting read, and some toys to check out too.

I have a comment or two, just because it's late and I'm struggling with a project, hoping a little distraction or two helps get the juices flowing.

On old computers, modern tools have helped a lot, but really I see them helping people write better assembly language code and/or adapt new hardware to old boxes more than I see them replacing assembly language. The older machines really are too lean in most cases, and on the 6502 it's *hard* to get higher-level code compiled down to compete with what a good programmer can do with the chip.

I suspect that's why it's still popular and fun too. 6502 hacking is a great challenge. Always will be IMHO.

On some 8 bitters, C actually made sense. It did for the Apple, though it took a while. PASCAL was useful pretty early on too. My first C program was on an Apple computer and it was insane. Did it on an Apple setup with four disk drives. The school set that one up after somebody spent more time swapping disks to edit, compile, link, and build the executable than they did coding. Back then, it was a "gee whiz" moment as I got a chance to experience a little of what people were doing on more powerful machines at the time, but that's about it. Went back to PASCAL, where it actually worked much better and leaner.

CP/M got a C environment that was capable.

Tandy CoCo computers got FLEX and then OS-9, which included some pretty powerful OS/development tools, including C. A CoCo 3 can actually run a fairly sophisticated C program, due to its CPU being a lot more capable and a memory environment that delivered the room needed for it to actually be workable.

Assembly ruled the day for a ton of stuff though. Still does. Always will on smaller systems, though some pretty great projects are out there. Our own Batari Basic is impressive! It's a nice BASIC that compiles down to pretty damn solid 6502 code. Wish I had the skill to port that a few places. Love the idea of it. Really powerful macro assemblers are where it's at these days. I see way too many projects happening in assembly, particularly if they are graphics / sound related things. C just isn't there.

Maybe somebody can comment on the Z80. There is a C for the ColecoVision. That's kind of amazing, though it's not self-hosted, using a PC. C on the Apple is self-hosted, though it's tough compared to cross development using a PC.

If one is programming right on the machine, I would think C is just a PITA. It's way too thick to make a lot of sense on any machine besides the CoCo that I am aware of. Maybe Apple, but that's a stretch too, and really all about a very well equipped one, or CP/M, and there's the Z80 again.

Programming right on the machine means BASIC, Assembly Language, and some custom tools maybe. Probably.

Today?

C is still the serious systems language. Lots of stuff is out there, and it's getting used too, but C is the foundation. Very relevant and not going anywhere anytime soon.

Assembly language isn't going anywhere either. It's not like people are writing applications in it. Higher level stuff is how that's going to get done, and in some cases very high level stuff, which is another bloat discussion. .Net? Thick... but then again, kind of effective, so long as people don't care too much about resources, which a lot of them don't because hardware is just cheap and big these days.

12 GB on the laptop folks. 12 e-ffing GB!! Holy crap. Gotta not think too hard about that.

Assembly is where the real wizardry often happens. New hardware and lower level programming happens in Assembly Language a lot. It also happens when hardware gets pushed hard. Encapsulate some assembly code into a greater application and a big boost can be had. Anything that has a BIOS or similar thing probably got authored in assembly language, and there are always custom libraries for this and that authored in the same way.

No matter what, we always need people to boot strap new devices and that means Assembly, and shortly after it means C, then it usually means a lot of other stuff, if it makes sense to do that.

One common thing is to take the new device and use Assembly Language to build up GCC for it. Once you've got that, you can then go off and build lots of languages and software. But that's all today.

In retro land, I have to say NO. Assembly is king. For a lot of stuff, it needs to be. There are interesting projects still attempting to push the edge though. Most are academic, but the Short language for Apple ][ demonstrated this year at KFest is really intriguing. It's not C, more like BASIC, but it's built to compile down to Assembly Language, and it's built for development right on the machine too, and it allows inline Assembly. Kind of the Batari Basic for the Apple, with support for the extra things it does compared to a VCS.

Ok, I'm going back to my project now. Ignore the ramblings....

#23 sack-c0s OFFLINE  

sack-c0s

    Stargunner

  • 1,097 posts
  • Location:Kingston Upon Thames, UK

Posted Fri Aug 17, 2012 2:46 AM

Z80 strikes me as being much better for C. The 16-bit stack helps a lot (and would make multitasking much easier too), the extra registers in place of the zeropage and the IX/IY registers can be used for struct pointers.

I think for me, C on the 6502 would be a case of fleshing out and validating an algorithm, and then reducing it down by hand to what it should be.

#24 flashjazzcat OFFLINE  

flashjazzcat

    Quadrunner

  • 7,773 posts
  • Location:United Kingdom

Posted Fri Aug 17, 2012 4:09 AM

Interesting ideas and opinions regarding CC65's software stack implementation discussed here.

#25 JamesD OFFLINE  

JamesD

    River Patroller

  • 4,500 posts

Posted Fri Aug 17, 2012 10:32 PM

cc65 (www.cc65.org), has the concept of register variables (6 bytes of them), so you can use those to help speed up your functions.

That's also used in the GCC 6809 compiler. I would think that page-zero variables yield better results due to less shuffling in and out of page zero, but you can't put everything in page 0.

It also has options for:

fastcall (parameter passing/returning in registers; assembly code only! Using this for C functions does something else, which saves some space, not speed)
static locals

Given what the 6502 has for registers, I can see why it's only for assembly. It would greatly simplify writing some assembly code for use with the compiler.

Another optimizing option is to define variables as zeropage, or BSS/DATA globals which reduces the overhead of stack manipulations.

That probably has the greatest potential to optimize code.
This would work really well on the 6809 with its movable direct page and stack relative addressing. I think the 65816 would also benefit a lot, for the same reasons as the 6809, though the code would be slightly larger.

Personally, I think cc65 is a great way to whip up some code on the A8. You then can hand-optimize in assembly, or you can optimize the C code quite a bit as well using some simple techniques.

Agreed. I've seen some excellent programs written in C for the 6502, many of which don't even require further optimization.

I would think the generated code would be a bit large though, due to the amount of register swapping and stack management. Smart use of cc65's zero-page support would certainly help a lot.

PS. cc65 uses the hardware stack for return addresses, via the normal jsr/rts assembly instructions. There's also a special C stack that can be anywhere in memory and any size you think will work with your program (with a custom linker configuration file, of course).

It's the handling of that special C stack that adds a lot of overhead. If you have a program of any size you can't stick everything in page 0 so you'll be stuck using it.

cc65 is certainly a very useful tool and there's no reason you can't write entire apps in it, but it lacks many ANSI items, doesn't have more advanced optimizations, and due to the difficulty of generating optimal 6502 assembly it's slower than it needs to be. It's not a full replacement for assembly but not everything needs to be in assembly.

FWIW, I figure 65816 C output would be 30% smaller than 6502 code just due to 16-bit support and stack relative addressing. The 65816 code would also be noticeably faster.



