
Assembler INC instruction: a bit of confusion.



I'm a bit confused about the INC (increment memory by one) instruction. Is it self-contained, or must it always be used with an address or variable?

 

Here's an example. Let's say I want to use the accumulator as a counter to trigger an event when the counter reaches 4. How is it normally done? I've done this many times with the X and Y registers, but the accumulator is a bit different, I think.

 

Example.

 

	LDA #0
Counter
	INC
	CMP #4
	BNE Counter

 

Is that even possible, or do you always have to use a variable alongside the accumulator, something like this:

 

30700=CTR

	LDA #0
	STA CTR
Counter
	INC CTR
	STA CTR
	CMP #4
	BNE Counter

 

Or if this is wrong, can you give me an example of how to do this correctly? Thank you.

 


There's no accumulator-mode INC instruction: to bump the accumulator by one you must use CLC/ADC #1. It's normally more convenient to use the X or Y index register for this reason.
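For example, the question's accumulator loop could be written like this (a sketch, with the CLC/ADC #1 pair standing in for the non-existent accumulator INC; the label name is arbitrary):

	LDA #0
Counter
	CLC
	ADC #1
	CMP #4
	BNE Counter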

"INC CTR" modifies the memory location directly, so there's no reason for the "STA CTR" afterwards.
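Putting that together, a corrected memory-based version of the loop from the question would look like this (note the LDA CTR before the compare: CMP compares against the accumulator, which INC CTR doesn't touch):

	LDA #0
	STA CTR
Counter
	INC CTR
	LDA CTR
	CMP #4
	BNE Counter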

If the counter isn't used as an index onto specific data, it's quicker to count down:

	lda #4
	sta ctr
Loop
	dec ctr
	bne Loop

Using the X register to do the same thing:

	ldx #4
Loop
	dex
	bne Loop


This fact irritates me almost daily. Why in the world did they not make an INA, to go with INX and INY?

When Chuck Peddle designed the 6500 series, he didn't think it would be considered powerful enough to be used for personal computers. He was mainly trying to make it economically feasible to use CPUs in low-cost devices. So, Chuck's team did everything they could reasonably do within a self-imposed size and cost limit. Had Chuck known the chip would be so successful in the computer market, he probably would have done a few things differently, but as it is, the 6502 changed people's thinking about what features were really necessary and was partially responsible for the wave of RISC designs that would follow.

 

They did decide to add a ROR instruction after the first run, so that was nice.


ADC #$01 is worse than INC because it occupies one byte more, but otherwise it takes 2 clock cycles, just like the accumulator INC on the 65C02 and 65C816. So it is not that bad, as long as you keep the CLC outside the loop:

 

	LDA #$00
	CLC
LP	ADC #$01
	CMP #$04
	BCC LP

 

PS. One obviously has to keep in mind, if it matters, that ADC modifies the N, V, Z, and C flags, unlike INC/INX/INY, which only modify N and Z.
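Those limited flag effects are exactly why INC is handy in idioms like a 16-bit memory counter, where the increment must not disturb a carry used by surrounding arithmetic (CTRLO/CTRHI here are hypothetical labels for the counter's low and high bytes):

	INC CTRLO
	BNE NoWrap
	INC CTRHI
NoWrap

The high byte is only bumped when the low byte wraps from $FF to $00, and the carry flag is untouched throughout.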

Edited by drac030

When Chuck Peddle designed the 6500 series, he didn't think it would be considered powerful enough to be used for personal computers. He was mainly trying to make it economically feasible to use CPUs in low-cost devices. So, Chuck's team did everything they could reasonably do within a self-imposed size and cost limit. Had Chuck known the chip would be so successful in the computer market, he probably would have done a few things differently, but as it is, the 6502 changed people's thinking about what features were really necessary and was partially responsible for the wave of RISC designs that would follow.

 

They did decide to add a ROR instruction after the first run, so that was nice.

Pretty much every feature that went into RISC was done before the 6502, including simplifying the instruction set.

I believe one of the PDP machines used a simplified instruction set to make it cheaper.

So I don't think the 6502 was responsible in any way and RISC would have happened even if the 6502 had never existed.

At least I didn't find any reference to the 6502 having any influence in the computer architecture book one of the original RISC designers wrote.

 


I can't cite it, but I know I'd read somewhere that the 6502 was a big influence on RISC architectures: a simple instruction set and a ton of registers (using page 0 as "registers", that is).

Computer Architecture: A Quantitative Approach by Hennessy & Patterson made no mention of the 6502 having influenced the design in the edition I had, and these guys pretty much created the first official RISC machines. Maybe the current edition or some of the papers they published mention the 6502, but I read a couple of the articles and didn't see it.

Besides, even the Wiki, which doesn't have the full list of features the authors list as what RISC is in that book, lists earlier machines with RISC features that predate the 6502 by over a decade.

 

The 6800 has direct page addressing, and it predates the 6502, so claiming those are "registers" does nothing to support the argument for the 6502 influencing RISC. If anything, the complex addressing modes the 6502 added make it less RISC-like, since RISC originally only had simple addressing modes.

FWIW, modern RISC chips have rather large instruction sets and the only thing the 6502 had in common with RISC has proven to be the only thing it has dumped with age.

 

If you do find a 6502 reference from one of these guys I'd love to read it. It would be interesting to see how much the 6502 influenced them.

 

*edit*

I do seem to remember the ARM crew following some 6502 opcode naming conventions so maybe that was the influence you read about.

Edited by JamesD

Computer Architecture: A Quantitative Approach by Hennessy & Patterson made no mention of the 6502 having influenced the design in the edition I had, and these guys pretty much created the first official RISC machines. Maybe the current edition or some of the papers they published mention the 6502, but I read a couple of the articles and didn't see it.

Besides, even the Wiki, which doesn't have the full list of features the authors list as what RISC is in that book, lists earlier machines with RISC features that predate the 6502 by over a decade.

 

The 6800 has direct page addressing, and it predates the 6502, so claiming those are "registers" does nothing to support the argument for the 6502 influencing RISC. If anything, the complex addressing modes the 6502 added make it less RISC-like, since RISC originally only had simple addressing modes.

FWIW, modern RISC chips have rather large instruction sets and the only thing the 6502 had in common with RISC has proven to be the only thing it has dumped with age.

 

If you do find a 6502 reference from one of these guys I'd love to read it. It would be interesting to see how much the 6502 influenced them.

 

*edit*

I do seem to remember the ARM crew following some 6502 opcode naming conventions so maybe that was the influence you read about.

 

I dunno, what you are saying seems pretty definitive. I may have read about that idea here, for all I know.


Reduced really refers to a reduction in instruction complexity rather than the number of instructions (which might reasonably need to be larger to compensate for their simplicity), I think.

 

Right, a smaller set of extremely simple instructions, and lots of registers. Part of the idea was that huge, complex opcodes were burning up large areas of silicon, and they couldn't be optimized in the same way that a smart human could do with a more unrolled instruction set, not to mention all that silicon had a cost. I think the advent of pipelining and really sophisticated scheduling algorithms, not to mention lowered production costs as the technology matured, kind of got CISC over the hump, and RISC fell out of favor.

Edited by danwinslow

How do "large" and "reduced" square?

 

Look at the section titled 'Instruction Set' on this page:

https://en.wikipedia.org/wiki/Reduced_instruction_set_computing

 

 

It's more a question of instructions being easy to decode and execute, so the CPU takes fewer logic gates and can run at a high clock speed with high throughput.

The real difference in simplicity shows when you start comparing transistor counts.

An 80286 has over 130,000 transistors, but an ARM6 has around 35,000 even though the ARM6 is about 10 years newer.

A Pentium from about the same time as the ARM6 has 3,100,000 transistors and didn't offer half the clock speed of ARM6-derived processors such as the StrongARM.

A Pentium 4 "Prescott" has 125,000,000 transistors. I couldn't find a transistor count for the new ARM Cortex-A72, but I'm sure it's still a fraction of that.

 

And just as important, it's easier to optimize and generate code for a RISC CPU than for an x86 CPU.

*edit*

That was really important when the chips were introduced since compiler technology was much more limited back then.

 

Edited by JamesD

I thought RISC stood for (R)educed (I)nstruction (S)et (C)omputing. I would take that to mean it is the instruction set itself that is reduced, and not a commentary on the formulation of the instructions. I'm just going by what it says. ;)

Well, it does, though I believe the C was for Chip. What you have to understand is that the designers were comparing their design to mainframe/mini-computer CPUs, so it's a bit of a relative term and was more about a catchy acronym than reality.

 

 

Right, a smaller set of extremely simple instructions, and lots of registers. Part of the idea was that huge, complex opcodes were burning up large areas of silicon, and they couldn't be optimized in the same way that a smart human could do with a more unrolled instruction set, not to mention all that silicon had a cost. I think the advent of pipelining and really sophisticated scheduling algorithms, not to mention lowered production costs as the technology matured, kind of got CISC over the hump, and RISC fell out of favor.

When all operations are register oriented (add, subtract, multiply, divide, bit shift, etc.), it greatly simplifies the bus interface and pipelined architecture.

The reduced addressing modes for the load and store instructions offer similar benefits.

When you add 32 registers, you eliminate a lot of the register shuffle that takes place in architectures like the x86, so the greater number of instructions required due to their simplicity is largely offset.

Memory efficiency was not the strong point of RISC at first, kinda like the 6502. Newer instructions have helped a lot with this since then though.

 

Once superscalar technology was introduced, CISC instructions could be executed in a single clock cycle.

Part of the reason the Intel chips dominated the desktop is the huge budget they had available due to the size of their market.

RISC chip manufacturers couldn't keep up. But then ARM sold over 12 billion cores last year so they have a little money to throw at R&D these days.

Other RISC architectures like MIPS and SPARC haven't been as successful because they failed to target the embedded market.

 


ROR was intended from the start, but it was broken in the early run, so they deleted it from the manual.

http://www.pagetable.com/?p=406

When Chuck Peddle designed the 6500 series, he didn't think it would be considered powerful enough to be used for personal computers. He was mainly trying to make it economically feasible to use CPUs in low-cost devices. So, Chuck's team did everything they could reasonably do within a self-imposed size and cost limit. Had Chuck known the chip would be so successful in the computer market, he probably would have done a few things differently, but as it is, the 6502 changed people's thinking about what features were really necessary and was partially responsible for the wave of RISC designs that would follow.

 

They did decide to add a ROR instruction after the first run, so that was nice.

 


I do remember pretty distinctly that the C was for Computing rather than for Chip. If literature says Chip now, I would say it is a backronym, like DVD went from digital video disc to digital versatile disc. But anyway, thanks all for the extra info regarding the subject. It's interesting. I can't help but think of CISC as a "high level" version of machine language compared to RISC's lower-level version. And by "higher level" I mean like in programming languages, not as implying better (or worse).


When Chuck Peddle designed the 6500 series, he didn't think it would be considered powerful enough to be used for personal computers. He was mainly trying to make it economically feasible to use CPUs in low-cost devices. So, Chuck's team did everything they could reasonably do within a self-imposed size and cost limit. Had Chuck known the chip would be so successful in the computer market, he probably would have done a few things differently, but as it is, the 6502 changed people's thinking about what features were really necessary and was partially responsible for the wave of RISC designs that would follow.

 

They did decide to add a ROR instruction after the first run, so that was nice.

FWIW, none of the 8-bit CPUs were in any commercial general-purpose "Personal Computers" until 1975.

BYTE supposedly used the term for the Sphere 1 though I'm not sure if the term was applied to it at the time or retroactively.

Before that there was the Micral N, which used the Intel 8008, but it was pretty obscure and I'm sure nobody saw it as a Personal Computer.

Since personal computers didn't really exist yet, I doubt the 6502's designers thought about it in that way at all.

Chuck and the team that developed the 6502 would have been thinking about mini-computers, in which case they were right: the 6502 isn't that powerful.

They were mostly looking to compete with the 6800, which was used exclusively as an embedded CPU prior to the Sphere 1.

If personal computers had hit the market a year or two sooner that might have changed the goals for the 6502 completely.

 

As for RISC, if a couple of papers by some university professors had hit a little sooner, the 6502 may very well have been more RISC-like.

As it is, it does use a prefetch, which is the first pipeline stage; it doesn't use microcoded instructions; and the simplified instruction set was aimed at reducing the die size.

It did use several of the same ideas, just not the complete package.

They must have been somewhat familiar with computer architecture discussions that had been going on since the 60's when it was designed.

I'd definitely say there was some common influence for the 6502 and RISC.

The prefetch alone was a pretty significant feature for an 8-bit CPU of the time.

The only other 8-bit CPUs I'm aware of that used a prefetch before around 1990 were the Hitachi 64180 (which became the Z180) and the 6309, both of which came around a decade later than the 6502.

The prefetch is responsible for a lot of their performance increase over the Z80 and 6809, on which they are respectively based.

 


Just checked Wikipedia for RISC. FJC was on the money. Here is a quote from the article:

 

 

Instruction set

A common misunderstanding of the phrase "reduced instruction set computer" is the mistaken idea that instructions are simply eliminated, resulting in a smaller set of instructions.[20] In fact, over the years, RISC instruction sets have grown in size, and today many of them have a larger set of instructions than many CISC CPUs.[21][22] Some RISC processors such as the PowerPC have instruction sets as large as the CISC IBM System/370, for example; conversely, the DEC PDP-8—clearly a CISC CPU because many of its instructions involve multiple memory accesses—has only 8 basic instructions and a few extended instructions.

The term "reduced" in that phrase was intended to describe the fact that the amount of work any single instruction accomplishes is reduced—at most a single data memory cycle—compared to the "complex instructions" of CISC CPUs that may require dozens of data memory cycles in order to execute a single instruction.[23] In particular, RISC processors typically have separate instructions for I/O and data processing.[citation needed]

Edited by fujidude

I do remember pretty distinctly that C was for computing rather than for chip. If literature says chip now, I would say it is a backronym like DVD went from digital video disc to digital versatile disc. But anyway, thanks all for the extra info regarding the subject. It's interesting. I can't help but think of CISC as a "high level" version of machine language compared to RISC lower level version. And by "higher level" I mean like in programming languages, not as implying better (or worse).

The Wiki does say Computing, so you must be right.

 

Edited by JamesD
