
Z80 vs. 6502


BillyHW


On the Apple screen layout...

I wrote some routines to print text on the hi-res display.

I came up with two ways to do it. One printed a character at a time and the other printed one scan line of the text at a time.

I opted for the first method but I think it was actually faster to print one scan line at a time.


I do too. Fast screen writes are a big pile of code and tables...

 

Bummer too, because the CPU did not have to wait for DMA, which is a nice speed boost somewhat negated by the funky addressing.

 

A linear screen would have made the most of that, but Woz wanted to save a chip... Still, I really like Apples and they did well all things considered.
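To make the "funky addressing" concrete, here's a rough sketch (Python, purely for illustration) of the well-known formula for a hi-res line's base address. The three-way interleave is why fast screen writes need tables or math like this:

```python
def hires_line_base(y):
    """Base address of hi-res scan line y (0-191) on the Apple II,
    for the page-1 buffer at $2000. Lines are interleaved at three
    levels rather than laid out linearly."""
    assert 0 <= y < 192
    return (0x2000
            + (y % 8) * 0x400        # line within an 8-line group
            + ((y // 8) % 8) * 0x80  # group within a third of the screen
            + (y // 64) * 0x28)      # which third of the screen

# Consecutive lines are 0x400 bytes apart, not 40:
# line 0 -> $2000, line 1 -> $2400, line 8 -> $2080
```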


I was searching for an old magazine that had code comparisons between different CPUs of the day and ran across this:

 

The 6502 was released to the market in September 1975 at $25, while at the same show the 6800 and Intel 8080 were selling for $179. At first many people thought it was some sort of a scam, but before the show was over both Motorola and Intel had dropped their prices to $79. Far from killing it off, this legitimized the 6502, and it started selling by the hundreds.

http://www.factbook.org/wikipedia/en/m/mo/mos_technologies_6502.html

 

 

One comment I found repeatedly on this topic in favor of the 6502 is low interrupt latency, which is true. *IF* you used the alternate register set on the Z80 for interrupt handling you could drastically reduce latency over the 8080, but as long as you don't need a lot of code in your interrupt handler, the 6502 is sure to beat the Z80. Some of the 6502's lead would be eaten up by 16-bit manipulation. The 6803... I haven't really looked at its latency, but the 6800 wasn't bad. The 6809 has a fast interrupt mode (FIRQ) that reduces latency when you need speed, and once again it is pretty tough to beat.
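The latency claim is easy to sanity-check with datasheet numbers. Below is a rough back-of-the-envelope sketch (Python, purely illustrative; counts are from the MOS and Zilog manuals as I recall them, cycles and T-states are not directly comparable across CPUs, and instruction-completion time and wait states are ignored):

```python
# Rough interrupt-overhead comparison using datasheet cycle counts.
# Treat these as approximate.

# 6502: 7-cycle interrupt sequence, then save A, X, Y manually.
m6502_entry = 7
m6502_save = 3 + 2 + 3 + 2 + 3   # PHA, TXA, PHA, TYA, PHA

# Z80 (IM 1): 13 T-state acknowledge, then either push everything
# or flip to the alternate register set.
z80_entry = 13
z80_save_push = 11 * 4           # PUSH AF / PUSH BC / PUSH DE / PUSH HL
z80_save_exx = 4 + 4             # EX AF,AF' + EXX

print(m6502_entry + m6502_save)   # 20 cycles
print(z80_entry + z80_save_push)  # 57 T-states
print(z80_entry + z80_save_exx)   # 21 T-states
```

Even with the alternate registers trimming the Z80's overhead dramatically, the 6502 still gets into its handler with a full register save in about the same number of clocks.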


Fascinating stuff.

 

It would be interesting to know what the relative prices of these two chips were over the years.

CMOS chip technology had only been introduced into production in 1968. In the early to mid 70's they were probably still working out some of the kinks in manufacturing. When you think about it, it wasn't even possible to make some of these chips a few years earlier. LSI (Large Scale Integration) didn't exist until about the mid 70s (73 or 74?) and VLSI didn't come about until the 80s. (if the timeline I saw is correct)

 

Between the mid '70s and mid '80s, chip production improved drastically (die size and yield), and the quantities of parts companies purchased shot up dramatically as the PC market took off.

Just as important, higher density DRAM chips were introduced at lower prices per KB, support chips dropped in price, and circuit boards shrank in size.

 

If the 6502 was $25 when it was introduced, you can bet it was $5 or less in quantity by the early '80s.

That sounds like significant savings, but cutting the size of a circuit board, the number of drill holes, and the number of parts can dramatically cut the price of the machine.

Really, how much difference in the cost of a machine is there going to be when one CPU is $5 and the other $10 or even $15?

That's where machines like the Spectrum and Oric came in. Cut the parts count and the board size.

Just think how many TRS-80s Tandy could have sold if they had introduced a Model I clone with a custom chip in that size form factor. It probably would have outsold the ZX80/81.

 

One very important thing to remember: since Commodore bought MOS, they basically got chips at cost, where everyone else had to pay a markup.

I can't remember the exact amount, but by late in the C64's life I think the machine cost between $35 and $50 to produce.

That's barely more than the CPU was when the Apple I was introduced!


I believe a lot of the reasoning behind going with a Z80 or 6502 was related to what the engineers and designers of these systems were experienced with. The UK computer scene only really took off with the ZX80 and then the ZX81, which were a few years later than the Apple II and Commodore PET in the States. I remember reading some of the history behind the Amstrad CPC, and apparently it was originally destined to have a 6502; however, there was more Z80 expertise around (at least within the team hired to design the system), so they went with the Z80.

 

Either way it really is a shame (and I feel it's held back progress) that the IBM PC design ended up the de facto standard. How would the industry have looked if the Commodore 64 had evolved like the PC, or if Apple had put more effort into the IIGS, for example?

 

Barnie

Edited by barnieg

Without the IBM PC on the market you have to wonder if there would have been a Macintosh, Amiga, or Atari ST.

 

The C65 was certainly interesting.

 

Just what would Atari have done?

 

If you look at the MSX market, an MSX2 machine included a 6MHz 64180 as a 2nd CPU and then the TurboR was introduced. Maybe MSX would have been popular in the US without IBM.

 

With Apple's full backing you have to think the IIgs would have been faster at the very least.

 

Tandy wouldn't have been building PC clones, so they would have focused more on updating their other machines, and Motorola probably would have gone ahead and introduced the RMS chipset.

With the RMS chipset, the CoCo 3 could have had sprites, a 4096-color palette, bit-plane graphics with up to 32 colors at once, a screen buffer larger than the display that could be scrolled just by setting registers, higher-res color graphics, and up to 1 MB of addressable RAM.

Just think of the battle that would have set up in the market. The RMS chipset was also designed to interface to the 68000. Would Tandy have introduced a 68000 machine based on it? It certainly wouldn't have been their first 68000 machine.


If you look at the MSX market, an MSX2 machine included a 6MHz 64180 as a 2nd CPU and then the TurboR was introduced. Maybe MSX would have been popular in the US without IBM.

Just for the record, everything from MSX up to MSX2+ used only a single Z80 CPU. I'm wondering where the Hitachi HD64180 was used.

 

When MSX came to the United States, the ColecoVision already had the market with the same graphics capabilities, and MSX was priced too high, so it couldn't have been successful.


Just for the record, everything from MSX up to MSX2+ used only a single Z80 CPU. I'm wondering where the Hitachi HD64180 was used.

The JVC HC-90 & HC-95 had a 64180 as a 2nd CPU. I believe you switched which CPU was running; you weren't running both at the same time.

 

When MSX came to the United States, the ColecoVision already had the market with the same graphics capabilities, and MSX was priced too high, so it couldn't have been successful.

I can't say for sure what would have happened but MSX integrated the "MSX Engine" into a single chip eventually, which led to cheaper machines.

MSX was certainly popular in South America. I'm not sure the machine was specced high enough for the US market but without the 16/32 bit machines, who knows what would have happened. There certainly would have been more opportunity for MSX.

Why the ColecoVision and Adam failed has been discussed repeatedly, but they were definitely costly to produce, and I'll never think the internal tape drives were a good idea for the Adam. Coleco couldn't use the MSX Engine chip in their systems, so they certainly would have been at a disadvantage vs. MSX in the long run.

 

MSX and Colecovision would have required some sort of upgrade to compete with faster and more powerful 8 bits from other manufacturers. I think that much is clear.

Ultimately, all machines would have had to migrate to faster Z80s, 65816s, or whatever to compete.

Edited by JamesD

I programmed both in the day and I can't pick a favorite. Each felt different when coding. 6502 is more RISCy.

 

BTW did you know the Z80 only has a 4-bit ALU inside? There's no performance penalty for 8-bit operations because all instructions have enough clock cycles anyway, but 16-bit operations do suffer. See p. 10:

http://archive.computerhistory.org/resources/text/Oral_History/Zilog_Z80/102658073.05.01.pdf
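The penalty shows up in the published T-state counts. A quick sketch (Python; counts as given in the Z80 manual):

```python
# Documented Z80 T-state counts for register-only operations.
# 8-bit ALU ops fit inside the basic 4-cycle opcode fetch, so the
# 4-bit ALU is invisible there; the 16-bit add needs extra machine
# cycles while the ALU grinds through the operands a nibble at a time.
t_states = {
    "ADD A,r": 4,     # 8-bit add: no visible penalty
    "INC r": 4,       # 8-bit increment
    "ADD HL,ss": 11,  # 16-bit add: 4 + 4 + 3
    "INC ss": 6,      # 16-bit increment (handled by the address logic)
}
print(t_states["ADD HL,ss"] - t_states["ADD A,r"])  # 7 extra T-states
```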

Edited by ClausB

The 6502 is "RISCy"?

 

There is more to RISC than a "Reduced Instruction Set". RISC has other key features.

 

Does the 6502 use a small set of highly optimized instructions, with no microcode?

Yes. This is pretty much the only criterion the 6502 meets as far as RISC goes.

 

RISC uses a large number of general-purpose registers and general-purpose instructions.

That doesn't appear on the Wiki, but if you search for 'RISC general purpose registers' you will find plenty of articles that use it as part of the definition of what RISC is.

The idea is that all instructions work on all registers and you have many of these registers. You'll find the word orthogonal used.

The 6502 register set is small, registers are special purpose, and instructions are specialized by register.

The 6502 isn't even close here.

 

RISC uses a load and store architecture, meaning memory is accessed through load and store instructions ONLY.

For example, instructions like ADD do not access memory. You load values into registers, add them, then store them back to memory in separate instructions.

A large part of the 6502 instruction set does not meet this standard and the 6502 clearly does not even attempt to use this design philosophy.

 

While maybe not part of the Wiki definition of what a RISC CPU is, it is a common RISC feature to execute instructions in a single clock cycle through a simple pipelined architecture. The 6502? Nope... and it would not benefit from pipelining at all. Yes, I have even seen claims on the internet that the 6502 is pipelined, but it does not execute portions of multiple instructions at once AT ALL. That claim is completely false, and whoever posted it clearly doesn't understand pipelining. Early "RISCy" CPUs didn't use pipelining either, but I included this because of the claims about the 6502. The RISC design philosophy isn't something someone came up with overnight; it evolved as a series of design concepts, and one day someone decided to call it RISC. I believe that was well after the initial concepts had been put to use, and I believe pipelining was already in use by then. The older CPUs using a similar design philosophy were called RISC retroactively.

 

So is the 6502 "RISCy"? No. While the 6502 isn't microcoded and it uses a fast, simple instruction set, at the very least its registers aren't general purpose and it doesn't use a load/store architecture. To be honest, RISC takes simple to another level as far as the instruction set and register use go.

There really isn't a term like CISC or RISC that fits the 6502 that I'm aware of.

If I were to make up a term I'd probably call it SISC: Simple Instruction Set Chip. I'd define SISC as a chip with a small number of registers, simple instructions, and a non-microcoded implementation, which fits the 6502 perfectly.


I just meant it feels more like RISC than the Z80.

 

No pipelining? There you're wrong. Look at the original MOS manual, section 5.1, Concepts of Pipelining and Program Sequence. It describes the overlap of memory fetching with interpretation of the previous data. Only a 2-stage pipe, but a pipe no less. It's why a taken branch wastes a third cycle fetching the next opcode it won't use. It's why indexed addressing with a page crossing wastes a cycle fetching data from the wrong page. It's why read/modify/write instructions waste a cycle writing bad data before writing good data.
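The page-crossing penalty can be sketched as a toy model (Python, purely illustrative):

```python
def lda_abs_x_cycles(base, x):
    """Cycle count for 6502 LDA abs,X: 4 cycles, plus 1 if adding X
    carries into the high byte. The overlapped fetch has already read
    from the un-carried page, so a crossing costs a re-read."""
    page_crossed = (base & 0xFF00) != ((base + x) & 0xFF00)
    return 4 + (1 if page_crossed else 0)

print(lda_abs_x_cycles(0x2000, 0x10))  # 4 (same page)
print(lda_abs_x_cycles(0x20F8, 0x10))  # 5 (crosses into $21xx)
```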


I suppose "RISCy" just means closer to RISC than to CISC.

Technically speaking, the 6502 *is* CISC, just not a typical example of it.

 

A CISC CPU doesn't have to be microcoded; it just has to have complex instructions that do multiple operations with a single instruction, like load-and-add.

While many of the 6502's instructions have similarities to RISC, the advanced addressing modes are clearly CISC.

 

If you look at loading the accumulator by indexing off of a page zero address:

LDA (pointer),Y

 

That is one instruction.

 

A RISC chip might do the following set of instructions to do the same thing (this is a theoretical example and probably a step or two longer than a real chip):

Load A0,#pointer ; Load the page zero address into a register.

Load A1,(A0) ; Load the value at that address into a register.

Load A0,Y ; Load the index value into a register.

Add A1,A0 ; Add the index value to the other value.

Load A0,(A1) ; Load whatever is at that address into a register.

 

While the steps are technically the same, the 6502 does them with one instruction and RISC uses multiple instructions.

 

 

ClausB,

Hmmmm, I didn't realize an instruction prefetch was considered a pipeline. FWIW, the info I found called it the first one-step instruction pipeline.


If you look at loading the accumulator by indexing off of a page zero address:

LDA (pointer),Y

 

That is one instruction.

 

A RISC chip might do the following set of instructions to do the same thing (this is a theoretical example and probably a step or two longer than a real chip):

Load A0,#pointer ; Load the page zero address into a register.

Load A1,(A0) ; Load the value at that address into a register.

Load A0,Y ; Load the index value into a register.

Add A1,A0 ; Add the index value to the other value.

Load A0,(A1) ; Load whatever is at that address into a register.

It would be CISC if pointer could be any address, but it is limited to the zero page. So if you consider the 6502's zero page to be an extended register set (like I do), it looks very different, because the setup code required has to be considered too.

 

IMO LDA (pointer),Y replaces only your last two example instructions.


Page zero just uses $00 internally as the MSB of the address, which allows shorter instructions that take fewer cycles to process. That is all it does. It is NOT an extended register set, and that's one thing I'll never agree with. The 65816 even lets you change the value of the MSB for fast memory access anywhere in the first 64K. Are you going to try to tell me that is an extended register set as well? On the 65802 that would be all of the memory it can address. It's called direct addressing, not registers.

 

This has actually been debated repeatedly elsewhere with 6502 fans calling it RISC, some people calling it CISC, and some calling it a bridge between the two.

I am going to quote from a post in one of those topics as it sums up the CISC features of the 6502 well. Feel free to try and explain away all the 6502's CISC features.

 

The 6502 is the epitome of CISC design: it has many complex addressing modes: zero-page, absolute, accumulator, (indirect,x), (indirect),y, immediate, and various additional combinations. The original 6502 also has a number of unassigned opcodes which try to do two operations at once, sometimes even producing useful results. This was the source of many incompatibilities between the original and later revisions like the 65C02, which added new instructions that replaced these "undocumented" opcodes.
Edited by JamesD

I never saw it as an extended register set either.

 

Back in the day, most of my peers associated the 6502 with CISC. Frankly, given the scope of 8-bit CPUs, its instruction set is fairly complex. If one looks forward to the 6809, it's deffo CISC, as that complexity is even higher. RISC designs feature less overall complexity and far more consistency in overall instruction applicability.

 

RISC designs, as I understand them, were also intended to help with compilers, whereas a main feature of CISC is making it easier for people to author efficient code by hand. Another big difference is that the consistency and simplicity allow for more aggressive design in terms of clock speed, etc...

 

A fun discussion! No need for it to get crappy. We can and will differ.

 

On the topic of registers, didn't the MOS data sheet simply refer to this as direct addressing? IMHO, register has a very clear meaning with load - store type designs, and the Z-page really isn't a register set in that context, though people are completely free to treat it as such for indexing, etc...


RISC designs, as I understand them, were also intended to help with compilers, whereas a main feature of CISC is making it easier for people to author efficient code by hand. Another big difference is that the consistency and simplicity allow for more aggressive design in terms of clock speed, etc...

It simplifies compiler design, makes code optimization easier, and makes it easier to pipeline the processor and run the CPU at a higher clock speed.

 

I didn't realize it until this discussion, but after looking up pipelines and RISC, the two concepts actually have their origins back in the '50s.

 

On the topic of registers, didn't the MOS data sheet simply refer to this as direct addressing? IMHO, register has a very clear meaning with load - store type designs, and the Z-page really isn't a register set in that context, though people are completely free to treat it as such for indexing, etc...

Yup, the 6502, 6800, 6803, 6809, 65816, etc... all call it direct addressing, not registers.

 

 

In my post above I merely used direct addressing as one example of how the 6502 is CISC.

There are clearly many things that indicate the 6502 is CISC.

 

Anyone that's actually looked at RISC code knows my RISC example isn't far from what RISC code actually looks like.

Code may vary from MIPS to PA-RISC to ARM to PowerPC or whatever... but they all break things down into almost ridiculously simple to process steps.

All of those CPUs have several registers and most instructions work the same on them all. You still have some things like MIPS holding the last return address in a special register or other CPU specific optimizations, but the main registers are general purpose.

If you look at a CISC CPU like the 68000, it has separate address and data registers along with special instructions for each.

 

Is RISC better than CISC or CISC better than RISC?

It depends. RISC is usually more memory intensive but easy to process fast and cheaply.

CISC is usually more memory efficient but is slower or more expensive to process fast.

 

Is it a bad thing for the 6502 to be CISC? Or if it were RISC for that matter?

If the 6502 were RISC it would benefit from general purpose registers which would actually be an improvement. It would also be easier for a compiler to generate code for it. But that would be at the cost of memory which was very expensive at the time the 6502 was introduced and I don't think it would be any faster.

Frankly, it's probably better that the 6502 is CISC, because 1K and 4K machines ruled the day when the CPU came out. If you think 6502 code is bigger than Z80 code, imagine if it were RISC! You'd save a little space because of general-purpose registers, but losing the advanced addressing would probably increase code size by up to 20%. That is huge in a 1K machine! Just think of all the games that wouldn't fit in 16K, 32K, 48K, or 64K anymore. When RISC hit the desktop, we had moved to 512K or more RAM, so it wasn't an issue.

I'm also not sure RISC would even be practical with 8 bit registers. You'd need 16 bit registers to access 64K. You'd almost need a register scheme like the 6808 which isn't very efficient code wise. As a microcontroller that's fine, but as a general purpose CPU, not so much.

 

 

On the issue of whether or not the 6502 is pipelined.

This ultimately comes down to what is a pipeline?

The 6502 breaks the execution of instructions down into separate steps as does a pipeline.

The 6502 has a prefetch that grabs the next byte, just as a pipeline has a prefetch to keep the pipeline from stalling.

But the execution only handles one instruction at a time. At least that appears to be the case from what I've read, and based on the branch penalty.

Also, if it were a multi-stage pipe you'd have all instructions effectively taking a single clock cycle unless the code encountered a branch, and you'd also have greater branch penalties.

So... the designers say it's pipelined, but it looks like their idea of a pipe is a prefetch and decoding one instruction.

This is a bit like Pluto being a planet. Pluto remained a planet until scientists got together and said it wasn't.

As long as scientists aren't saying the 6502 isn't pipelined... I'll go along with it being pipelined.

FWIW, the 6309, 6303, 64180, and Z180 all have an instruction prefetch to shave off a few clock cycles; at least when in their native mode.

I just checked the 6303 manual and it mentions pipeline control and I'm guessing manuals for the other Hitachi CPUs would as well... so pipelined the 6502 must be and I was wrong earlier.

I'll just adjust my concept of what a pipeline is accordingly and what it means for a CPU to have one.

It doesn't mean effectively having 1 cycle per instruction, it just means reducing clock cycles per instruction.


I don't see a pipeline until I see self-modifying code having to account for the pipe. The 6502 just does a prefetch. Exactly like Pluto being a planet: adding "pipeline" to the definition doesn't get us anything that "prefetch" doesn't already cover. IMHO, not a pipeline.

 

CISC and memory use is where the 6502 and friends are pretty great. You want that complexity and granularity to make good use of very small memory. Agreed. RISC really doesn't pay off in small memory spaces, nor do compilers when people can do a much better job by hand.


I don't see a pipeline until I see self modify code having to account for the pipe. 6502 just does a pre fetch. Exactly like Pluto being a planet. Adding pipeline to the definition doesn't get us anything that pre fetch doesn't already cover. IMHO not a pipeline.

Oh come on... you know it's a single stage pipeline and you just don't want to admit it.

Or perhaps you think it may be total marketing Bolshevik? Hmmmmm... ?


 

The Spectrum basically has a monochrome 256x192 video layout with an added layer of color attributes.

Each byte of the screen buffer is 8 pixels and the color attributes determine what the foreground and background colors are for that byte.

The color attributes control blocks of 8 bytes stacked on top of each other (if I remember right), so you end up with a very blocky-looking screen unless you change the attributes on the fly. With only 16 (sometimes less than desirable) colors you get... well, you get something that looks a bit like a coloring book where someone didn't stay inside the lines and the artist didn't get the big box of crayons. :)

...

Machines like the Speccy, VZ/Laser, and Tandy CoCo all could have had much better graphics with a little extra logic. But it seems logic was expensive and much more time-consuming to develop in those days. The SMS came out in 1986, about 10 years after the first home computers.

 

I think the main reason for such 'attribute'-based graphics systems was the cost of RAM, so they made color graphics with low RAM usage. A machine like the 16K Spectrum could not have been given better (palette-based) graphics, because the video RAM alone would have taken more than 16K.

Something similar goes for the C64, where sprite logic and more were present, likely no less complex than better graphics logic would have been.

Additionally, more video RAM means more CPU time to calculate and write graphics. That was the problem with the Amstrad CPC and MSX: the Z80 was just too slow for their higher graphics modes.
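The RAM argument is easy to check with arithmetic. A quick sketch (Python; the 16-color mode is a hypothetical for comparison only):

```python
# Video RAM cost of the Spectrum's attribute scheme vs. a
# hypothetical 16-color (4 bits per pixel) frame buffer at the
# same 256x192 resolution.
bitmap = 256 * 192 // 8           # 1 bit per pixel -> 6144 bytes
attrs = (256 // 8) * (192 // 8)   # one attribute byte per 8x8 cell -> 768
spectrum_total = bitmap + attrs   # 6912 bytes

full_color = 256 * 192 * 4 // 8   # 24576 bytes, more than a 16K machine has
print(spectrum_total, full_color)
```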


..

The Spectrum basically has a monochrome 256x192 video layout with an added layer of color attributes.

Each byte of the screen buffer is 8 pixels and the color attributes determine what the foreground and background colors are for that byte.

The color attributes control blocks of 8 bytes stacked on top of each other (if I remember right), so you end up with a very blocky-looking screen unless you change the attributes on the fly. With only 16 (sometimes less than desirable) colors you get... well, you get something that looks a bit like a coloring book where someone didn't stay inside the lines and the artist didn't get the big box of crayons. :)...

Changing attributes on the fly? I don't know if it was ever used - maybe in some demos. Despite all that has been said, which is true, some games look really good. Especially Popeye and a couple of others with large sprites :-)

The correct description is that one color attribute (1 byte) controls the background and foreground colors of an 8x8-pixel block. Two groups of 3 bits define the colors, so 8 colors max; 1 bit is brightness for both background and foreground, and 1 bit is for flash (not really useful).
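In code terms, the attribute layout can be sketched like this (Python; bit layout as documented in the Spectrum manual):

```python
def decode_attr(attr):
    """Decode a ZX Spectrum attribute byte.
    Bit 7: FLASH, bit 6: BRIGHT, bits 5-3: PAPER (background),
    bits 2-0: INK (foreground)."""
    return {
        "flash": bool(attr & 0x80),
        "bright": bool(attr & 0x40),
        "paper": (attr >> 3) & 0x07,
        "ink": attr & 0x07,
    }

# BRIGHT white ink on blue paper:
print(decode_attr(0b01_001_111))
# {'flash': False, 'bright': True, 'paper': 1, 'ink': 7}
```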

