Misuse of the word "Optimization"



Why is it that whenever I use the phrase "optimizing code," people always assume I'm referring to fine-tuning or nit-picking code? Fine-tuning or nit-picking barely speeds up code at all. Architectural optimizations and taking different approaches make far more of a difference, so why do small optimizations always come to people's minds whenever they hear the word "optimize"?
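A toy C sketch of that difference (my example, not from the thread): no amount of fine-tuning the loop body changes its O(n) cost, while switching to the closed-form expression removes the loop entirely.

```c
#include <assert.h>

/* Fine-tuned or not, this stays O(n): the loop itself is the cost. */
unsigned long sum_loop(unsigned long n) {
    unsigned long s = 0;
    for (unsigned long i = 1; i <= n; ++i)
        s += i;
    return s;
}

/* Architectural change: Gauss's closed form makes it O(1). */
unsigned long sum_formula(unsigned long n) {
    return n * (n + 1) / 2;
}
```

Shaving a cycle off the loop body speeds the first version up by a constant factor; the second version wins by any factor you like as n grows.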

Edited by Multijointed Monster Maker

Optimizing can be for more than just speed gains. I consider it "optimizing" when I go over my code to make execution flow more cleanly or logically, or otherwise just to make the code easier to read for anyone who works on it after me. Such optimization may not speed things up, but it will make the program a whole lot more maintainable.

 

Then again, you might be surprised at how such tweaks really can speed things up sometimes. A little nip here and a little tuck there, especially in code that gets looped over a lot, can make big differences down the road.


Because it has no strongly defined meaning in terms of computer science.

 

To me it means things that a human can do to speed up an algorithmic process that the compiler/assembler/linker itself cannot do. These things usually reduce the generality of the code in favor of speed by taking advantage of special cases or of methodology that the compiler doesn't know about.
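A small C sketch of that idea (names and example are mine): the compiler must assume the table is unordered, so it can't replace the scan with a binary search. A human who knows the data is sorted can give up that generality for speed.

```c
#include <assert.h>

/* General version: works on any array, O(n). */
int contains_scan(const int *a, int n, int key) {
    for (int i = 0; i < n; ++i)
        if (a[i] == key)
            return 1;
    return 0;
}

/* Specialized version: valid only because WE know 'a' is sorted.
   The compiler can't prove that invariant, so it can't do this for us. */
int contains_sorted(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key)
            return 1;
        else if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return 0;
}
```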


I also consider creative use of DMA and V-RAM, fitting a sprite frame within the least number of 8x8 tiles, and recycling tiles between frames to be another type of optimization.

 

I'm writing my own game, and I'm planning on simulating sprite rotation by drawing out 1/4 of the rotation steps in ROM, using an algorithm to make the CPU rotate each rotation frame by 90 degrees into RAM for another 1/4, and using x and y flips for the other 1/2. Symmetrical sprites make it even easier to fake rotation.
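A rough C sketch of the 90-degree step for a 1bpp 8x8 tile (the tile format and function names are my own assumptions; the real thing would be 6502 assembly writing into RAM). Each tile is 8 bytes, one row per byte, bit 7 being the leftmost pixel:

```c
#include <assert.h>
#include <stdint.h>

/* Rotate an 8x8 1bpp tile 90 degrees clockwise:
   pixel (r, c) of the result comes from pixel (7-c, r) of the source. */
void rotate_cw(const uint8_t src[8], uint8_t dst[8]) {
    for (int r = 0; r < 8; ++r) {
        dst[r] = 0;
        for (int c = 0; c < 8; ++c) {
            int bit = (src[7 - c] >> (7 - r)) & 1;
            dst[r] |= (uint8_t)(bit << (7 - c));
        }
    }
}

/* Horizontal flip of one row: reverse the bits in a byte. */
uint8_t flip_h_row(uint8_t b) {
    b = (uint8_t)((b >> 4) | (b << 4));
    b = (uint8_t)(((b & 0xCC) >> 2) | ((b & 0x33) << 2));
    b = (uint8_t)(((b & 0xAA) >> 1) | ((b & 0x55) << 1));
    return b;
}
```

With the rotation and the flips, only a quarter of the frames ever need to live in ROM.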


Optimization just means making better - and you're right - in these days of compiled languages, the compilers have gotten so good that they handle many optimizations for you. If you're programming on ghetto-old computers, like many of us, that's not as true. Loop optimization is often best done by hand.
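Classic hand unrolling, for instance (sketched in C for readability; on an 8-bit CPU you'd do the same in assembly): fewer loop-counter updates and branches per element, at the cost of code size. This sketch assumes the length is a multiple of 4.

```c
#include <assert.h>

/* Straightforward loop: one index update and branch per element. */
int sum_plain(const unsigned char *a, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i)
        s += a[i];
    return s;
}

/* Unrolled by 4: one branch per four elements. Assumes n % 4 == 0. */
int sum_unrolled4(const unsigned char *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i += 4)
        s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    return s;
}
```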

 

And, of course, there's optimization done to reduce memory footprints and code size. That's often still best done by hand.


I've come to the realization that optimization, as it applies to the 6502, is a battle between speed and size. And sometimes both versus robustness.

 

I never use the word "optimizing" when I'm fine-tuning things, unless the fine-tuning itself is optimizing. I usually say "fine-tuning", "tweaking" or "polishing it off".

Edited by doppel

  • 4 weeks later...

We always used to call speeding code up "optimising" in the 80s. On a 1-2 MHz CPU, when you're trying to make a game, shoving around as much stuff as possible every cycle, especially in a loop, can make a massive difference. At the time there wasn't a lot of code sharing, so as long as code was well commented it didn't really matter how easy it was for other people to pick up. It's stuck with me ever since.

 

 

Pete


  • 2 weeks later...

I can never understand why everyone says "premature optimization is the root of all evil." It's like a million times easier to optimize your code early on than to wait till the end to fix it. At the first sign of my game lagging, I unroll a loop or knock a few cycles off a macro. I'm not waiting till the end of development to solve such a simple problem that can be fixed within seconds.

Edited by Multijointed Monster Maker

I can never understand why everyone says "premature optimization is the root of all evil." It's like a million times easier to optimize your code early on than to wait till the end to fix it. At the first sign of my game lagging, I unroll a loop or knock a few cycles off a macro. I'm not waiting till the end of development to solve such a simple problem that can be fixed within seconds.

Premature optimization refers to optimizing before you clearly understand where the bottlenecks are... Coding the first whack at a routine in some fast-but-less-maintainable way without knowing it's a bottleneck would be one example.

 

In your example you've tested and realized that a bottleneck exists. That's not premature.


I can never understand why everyone says "premature optimization is the root of all evil." It's like a million times easier to optimize your code early on than to wait till the end to fix it. At the first sign of my game lagging, I unroll a loop or knock a few cycles off a macro. I'm not waiting till the end of development to solve such a simple problem that can be fixed within seconds.

Premature optimization refers to optimizing before you clearly understand where the bottlenecks are... Coding the first whack at a routine in some fast-but-less-maintainable way without knowing it's a bottleneck would be one example.

 

In your example you've tested and realized that a bottleneck exists. That's not premature.

 

Man, did the ROM hacking community get that one wrong!


Man, did the ROM hacking community get that one wrong!

You think the ROM hacking community doesn't understand the bottlenecks involved before they optimize? That's a pretty dim view of them, I think.

 

Usually with ROM hacking, any performance issues are solved in the original game, and the hacker just needs to optimize to gain some additional space for the hack code.

 

Even with homebrewers, premature optimization is a mistake. Take the VCS for example: you could optimize the code that draws the scanline by running self-modifying code from RAM. Afterward you may find you have unused cycles on the scanline, but desperately need the RAM you threw at your optimization.


Man, did the ROM hacking community get that one wrong!

You think the ROM hacking community doesn't understand the bottlenecks involved before they optimize? That's a pretty dim view of them, I think.

 

Usually with ROM hacking, any performance issues are solved in the original game, and the hacker just needs to optimize to gain some additional space for the hack code.

 

Even with homebrewers, premature optimization is a mistake. Take the VCS for example: you could optimize the code that draws the scanline by running self-modifying code from RAM. Afterward you may find you have unused cycles on the scanline, but desperately need the RAM you threw at your optimization.

 

You haven't met the Super Mario World hacking team. They took Super Mario World and butchered it with tons of slowdowns that weren't in the original.


  • 4 months later...

I can never understand why everyone says "premature optimization is the root of all evil." It's like a million times easier to optimize your code early on than to wait till the end to fix it. At the first sign of my game lagging, I unroll a loop or knock a few cycles off a macro. I'm not waiting till the end of development to solve such a simple problem that can be fixed within seconds.

Premature optimization refers to optimizing before you clearly understand where the bottlenecks are... Coding the first whack at a routine in some fast-but-less-maintainable way without knowing it's a bottleneck would be one example.

 

In your example you've tested and realized that a bottleneck exists. That's not premature.

 

That's only half of it. Perhaps you've perfected your high-speed, branchless unrolled mega-awesome bubble sort. It'll always be beaten by the most lazily coded (but correct) quicksort or heap sort, if you have a large enough data set.

 

Understanding that something's a bottleneck is only half the battle. Understanding why it's a bottleneck and if there are better algorithms is an important step.
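Counting comparisons makes that point concrete (a throwaway C sketch of mine, not the poster's code): this bubble sort always performs n(n-1)/2 comparisons, so doubling the input roughly quadruples the work no matter how tight the inner loop is, while an n·log n sort's count only slightly more than doubles.

```c
#include <assert.h>

/* Bubble sort that counts comparisons. Micro-optimizing the body can't
   change the n*(n-1)/2 comparison count - only a better algorithm can. */
long bubble_sort(int *a, int n) {
    long cmps = 0;
    for (int i = 0; i < n - 1; ++i)
        for (int j = 0; j < n - 1 - i; ++j) {
            ++cmps;
            if (a[j] > a[j + 1]) {
                int t = a[j];
                a[j] = a[j + 1];
                a[j + 1] = t;
            }
        }
    return cmps;
}
```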

 

As for the original premise of this thread: I use "optimize" in all contexts of improving a program. For example, I could write a game using the general-purpose framework offered by the Intellivision EXEC. The EXEC is compact, fairly straightforward, and in many ways elegant. However, its architecture and overall structure are not well suited to many games: i.e., it's not an optimal choice. Then there's the issue of implementation: they squeezed a lot into 4K of ROM, but they gave up speed in the process. So, from a speed perspective it's far from optimal, though it's much closer on size. The concept remains valid at both levels.

 

Optimize is a mathematical concept: There's some theoretical minimum (or maximum) you're trying to achieve, and a solution is optimal if you've achieved it. Optimizing is merely the process of moving toward that minimum or maximum.

 

Using the term "optimize" to refer to low-level fiddling is almost as old as computing itself. (While that article is dated 1983, it's describing events of the 50s or 60s.)

Edited by intvnut

  • 4 weeks later...

My optimisation strategy is simple - my code gets thrown into 2 piles: the stuff that runs in the frameloop and the stuff that doesn't.

 

The frameloop code gets targeted for speed, often with a code/data size increase taking up more memory.

The outside code gets treated for size at the expense of speed - after all, if the level initialisation code takes 3 frames of solid processing instead of 2 when nothing is being drawn, who really sees that extra 20ms?

 

Not a massively elegant way of doing it - but it seems to work.
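A C sketch of that trade-off (illustrative names, my own): the frameloop flavour buys speed with a 256-byte table, while the out-of-loop flavour recomputes and stays small.

```c
#include <assert.h>
#include <stdint.h>

/* Frameloop flavour: spend 256 bytes of table so the hot path is one lookup. */
static uint8_t times3[256];

void init_times3(void) {
    for (int i = 0; i < 256; ++i)
        times3[i] = (uint8_t)(i * 3); /* wraps mod 256, same as the computed version */
}

uint8_t mul3_fast(uint8_t x) { return times3[x]; }

/* Out-of-loop flavour: no table, recompute on every call. */
uint8_t mul3_small(uint8_t x) { return (uint8_t)(x * 3); }
```

Same answers either way; the only question is whether this call site is hot enough to deserve the memory.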

Edited by sack-c0s

Even with homebrewers, premature optimization is a mistake. Take the VCS for example: you could optimize the code that draws the scanline by running self-modifying code from RAM. Afterward you may find you have unused cycles on the scanline, but desperately need the RAM you threw at your optimization.

Agreed.

 

Usually there are three steps involved in 2600 coding.

1. make it run somehow

2. look for bottlenecks and optimize code/data structure and algorithms(!) ("refactoring")

3. squeeze out the maximum performance by "peephole" code optimizations

 

Sometimes you mix 1 and 2, or 2 and 3. But you should never start early with 3. Usually step 2 is the most efficient one.

 

This goes for everything but the kernel. There, step 1 is usually skipped very quickly and a lot of time is invested into steps 2 and especially 3.
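A C-level analogue of a step-3 "peephole" change (my example; on the 6502 you'd be shaving cycles in assembly, and a modern compiler usually does this for you): strength-reducing division and modulo by a power of two into a shift and a mask for unsigned values.

```c
#include <assert.h>

/* Step-3 style micro-optimization: for unsigned x,
   x % 8 == x & 7  and  x / 8 == x >> 3.
   Old assemblers needed this done by hand. */
unsigned mod8(unsigned x) { return x & 7u; }
unsigned div8(unsigned x) { return x >> 3; }
```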

