SeaGtGruff

Everything posted by SeaGtGruff

  1. Yes, it would keep moving the sprite by the same amount unless you set the movement register to a different value. You can keep moving the sprite forever that way, as long as you keep strobing HMOVE. So if you want to move the sprite, say, 70 pixels to the right, you can strobe HMOVE for 10 lines, then stop.
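The arithmetic behind that 70-pixels-in-10-lines example can be sketched in Python. This is a hypothetical planning helper, not real 2600 code; the -8..+7 range is the signed 4-bit shift the HMxx registers support per HMOVE strobe:

```python
def plan_hmove(total_pixels, per_line):
    """Split a horizontal move into repeated per-scan-line HMOVE shifts.

    Each HMOVE strobe shifts an object by the amount set in its HMxx
    register, a signed 4-bit value (-8..+7 color clocks), so moving
    70 pixels at 7 per line takes 10 lines of strobing.
    """
    if not -8 <= per_line <= 7 or per_line == 0:
        raise ValueError("per-strobe shift must be a nonzero value in -8..+7")
    lines, leftover = divmod(total_pixels, per_line)
    return lines, leftover

lines, leftover = plan_hmove(70, 7)   # 10 lines of strobing, nothing left over
```

The leftover lets you see when a distance doesn't divide evenly, in which case the last line would need a smaller shift value.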
  2. Not sure why no one ever answered, as it's well known. This was done by strobing the HMOVE register on every scan line. Normally, when you strobe HMOVE to shift the players, missiles, and ball to the left or right by whatever values you've indicated for each of them, you get a black line 8 pixels long at the beginning of that scan line. In games where HMOVE was strobed only on specific lines because of the way the game screen was divided up into horizontal bands or zones, you can see a "comb" of these black lines along the left edge of the screen. What Activision decided to do was strobe HMOVE on every scan line so this "comb" effect wasn't apparent, since those black lines are on every scan line instead of on only some of the scan lines.
  3. Actually, the asterisks are not where the signal comes back, but where it comes out of the whatsit-- that is, using my fictitious cell labels from above, the asterisk is not where G2 is coming back from P1, but rather is where the signal comes out of H2. But otherwise it's as I described-- the asterisks always occur at a place where the signal coming out of a circuit can have an unpredictable state upon the initial powerup, given that one or more of the signals that feed into that circuit, or which are used to latch some signal going into the circuit (if I said that right), are actually coming back from further down the overall signal path.
  4. I know this is a very late reply, but I saw this thread and wanted to add a little to it since I spent some time several years ago trying to understand the TIA schematics and constructing an Excel spreadsheet to simulate the circuitry. Those asterisks are-- I believe-- the places where you have a signal feeding backward (or looping back) into a circuit, such that the state of the whatsit will be indeterminate or random when you first start feeding power into the TIA. That might be incorrect, but the reason I believe that that's what it means is because you see those asterisks throughout the schematics and they always occur at places where a signal from further down the line is being brought back to form a loop. When I was making my spreadsheet, each row represented a moment in time, and each column represented a transistor or gate or other thingy, beginning with the oscillator signal coming into the TIA, such that the value of cell A1 basically determined the values of the other cells on that row according to the logic associated with each column-- that is, the signal flow was A1, B1, C1, D1, E1, etc. My plan was to have each row start with a different or alternating oscillator value-- that is, A1 might be 0, then A2 would be 1, then A3 would be 0 again, etc. I actually decided to randomize the value of A1 so it could be either 0 or 1, to represent the uncertainty of what state A1 would be in at the moment the TIA powered up and started receiving the oscillator's signal. Most of the flow is in one direction, so it was easy to construct the logic for each cell. But as soon as I reached a point where a signal is being looped back, I was faced with a quandary: how could I construct the logic for, say, cell H1 if one of its inputs is a line coming back from, say, cell P1, given that I haven't even coded the logic for cell P1 yet and determined its value? 
What I ended up doing was to add a cell in front of H1 to represent the value of P1 from further along the signal flow, but actually coming from the row above, since its state had to have been determined during a prior moment in time-- that is, G2 was equal to P1, then H2 would be determined by whatever logic represented that particular circuit, then eventually I could determine what P2 was equal to and feed its value back into G3. The problem is, what to do about the very first row or moment in time? I decided to code the logic for the G column such that if we were on row 1 then we would randomly set G1 to a 0 or 1, given that we had no idea yet what its actual state should be, but on any row other than row 1 the G value would be pulled from the P cell of the row above (e.g., G2 from P1). As a result, I ended up with two rows for each state of the oscillator-- that is, rows 1 and 2 might begin with a value of 1, then rows 3 and 4 would be where the oscillator changed to a value of 0, etc. Thus, the odd-numbered rows represented the initial signal flow when the oscillator changes from 0 to 1 or vice versa, and the even-numbered rows represented the signal flow after any looped-back signals had changed their states and everything had settled down. When I did that, I noticed that everywhere I was randomizing the initial state of a signal because it was being looped back from further along the path and its value wasn't known yet, that was where those asterisks appeared in the schematics. I might be wrong about what they mean, but I genuinely believe they were put there to call attention to the places where random high/low states occur at the initial moment of powering up the TIA and the oscillator.
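The two-passes-per-oscillator-state idea from that spreadsheet can be sketched in Python. The cell names G/H/P are the post's fictitious labels, and the gate logic below is made up purely for illustration; the point is the structure: the looped-back value comes from the previous pass, and its very first value is randomized, which is the condition the schematic asterisks are believed to mark:

```python
import random

def simulate(passes, seed=None):
    """Simulate a signal path containing one feedback loop.

    Like the spreadsheet rows: each pass, the looped-back input G is
    taken from the P value computed on the *previous* pass. On the
    first pass P's state is unknown (the chip just powered up), so
    it is randomized.
    """
    rng = random.Random(seed)
    p_prev = rng.randint(0, 1)        # indeterminate at power-up
    history = []
    osc = 0
    for _ in range(passes):
        osc ^= 1                      # oscillator toggles each pass
        g = p_prev                    # looped-back input from last pass
        h = osc ^ g                   # illustrative gate logic only
        p = 1 - h                     # illustrative downstream logic
        history.append((osc, g, h, p))
        p_prev = p
    return history
```

Running it twice per oscillator state, as the post describes, gives one pass for the initial signal flow and one for the settled state.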
  5. I understand what you're saying, but I still think that a computer program-- and an assembler is a computer program-- can be programmed to be "intelligent" enough to recognize that "$0013" is a 4-"digit" hex address, and as such it should be able to interpret it as a 2-byte address even though the high byte is 0. Now, if the person using the assembler prefers it to default to one behavior or another-- such as automatically assuming that "$0013" is to be treated as a 2-byte address, or automatically simplifying it to a 1-byte zero-page address-- then a command line switch or .INI file parameter or whatever could be added for that purpose. Having to append extra characters to an opcode seems like a kludge. I'm familiar with that method, even if I haven't written or assembled any 6502 code in several years now, and I understand that it can be useful for forcing the assembler to use a particular address mode-- but in this particular case it seems like it shouldn't be necessary in the first place, because you aren't using it to force the assembler to use a different address mode than otherwise written, you're using it to force the assembler to use the address mode as written.
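The behavior being argued for is easy to express in code. This is a hypothetical mini-assembler operand check (not taken from any real assembler) that honors a four-digit hex operand as absolute even when the high byte is $00:

```python
def pick_address_mode(operand):
    """Choose zero-page vs absolute from how the operand is written.

    The idea from the post: "$0013" is written with four hex digits,
    so treat it as a 2-byte absolute address even though the high
    byte is 0, while "$13" gets the 1-byte zero-page mode.
    """
    digits = operand.lstrip("$")
    value = int(digits, 16)
    if len(digits) <= 2 and value <= 0xFF:
        return "zero-page", 1         # 1 operand byte
    return "absolute", 2              # 2 operand bytes, lo byte first

assert pick_address_mode("$13") == ("zero-page", 1)
assert pick_address_mode("$0013") == ("absolute", 2)
```

A command-line switch could flip the default the other way (always shrink to zero-page when possible), as the post suggests.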
  6. My personal feeling is that that's a problem with the assembler, because to my mind it should treat a 4-character hex address as absolute, even if the high byte is $00.
  7. I didn't bother to look up the correct address for RESM1, so I might have given the wrong one. But what I meant was that if the zero-page address were $nn, then the absolute address would be written in assembly as $00nn, not $nn00-- even though it would be expressed in Little Endian order in the actual machine code.
  8. I remember that I made and posted a chart showing when you could write to the PF registers, breaking it down to each playfield pixel (based on some calculations), and I think you (@Thomas Jentzsch) gave me feedback about timing variations for certain models/clones, which I added to the chart. I think I posted two versions, one showing the calculations and another that just gave the results. It should be here in one of the 2600 programming subforums. But I'm a little off-topic, since this thread is actually about HMOVE.
  9. As far as I remember, there are indeed some variations between specific models and clones of the VCS 2600 with regard to things like the timing of HMOVEs and the windows for when it's safe to write to the playfield registers. You might be able to find more information about the differences by searching the 2600 programming subforums for something like "2600jr HMOVE," "2600jr starfield effect," "2600jr playfield pixels," etc.
  10. If that's supposed to be the absolute version of a zero-page address, shouldn't it be $0013? Even though 6502 assembly is Little Endian, that doesn't change the way you write an absolute address.
  11. … he said while posting on a forum devoted to old gaming consoles. Yes, I admit it-- I'm late to the party. And if ESO offers "basically the exact same experience" as "Pretty much every MMO," it's still new to me, since I've never played any of them. I also never watched Seinfeld or Friends until long after they ended and were living on in Syndication Perpetuity. But getting back to ESO, it seems pretty absorbing and fun to me. I'm more into playing it solo than trying to make a bunch of friends to chat with, or group up with some random strangers to tackle a dungeon or boss that's deliberately designed to be too much for a single player to handle. On the other hand, I've occasionally helped other players out by crafting them some gear, or by teaming up to tackle a boss for a daily quest. It's just that I'd rather not have to pay attention to the chat window when I'd rather be exploring some location I haven't been to before, or seeing if I've improved enough to successfully beat a group dungeon or group boss on my own. And there are definitely some things that I don't care for, such as the obsession that all of the other players seem to have with leveling up as quickly as possible, or with creating the "best build," or charging other players outrageous prices for items which the game designers have deliberately programmed to be super-rare, etc. So it isn't all peaches and cream. But all in all, I'm having a lot of fun with it.
  12. In a way, TES games are designed such that you can't "finish" them per se, given that after you complete the main quest and all side/guild/other quests, you can continue to (re)explore dungeons and caves, or even pursue a life of sorts within the game's environment, such as collecting alchemy ingredients to sell to merchants, or crafting potions or other things to sell to merchants, or just wandering around town chatting with the locals even though they always talk about the same things. I kept telling myself for years that I'd never buy ESO, because I had a very dim view of online games-- not based on any personal experience, since I'd never played an online game before, ever; but rather based on things I'd read about online gaming from people who played online games. But about 10 months ago I finally bought ESO:Morrowind and all(? or most) DLC available at that time because it was on sale and I was trying to complete my collection of ES games. (I also bought the Skyrim Special Edition at the same time so I'd have all available official add-ons for it, too.) And I've basically been playing ESO ever since, to the exclusion of just about everything else. I primarily play ESO as though it were an offline game, meaning I try to solo as much of it as I can. But I also interact with other players from time to time, such as to craft them a weapon or set of armor, or when I need to join a group to go after a group boss or clear a group dungeon-- except that I've gotten to where I can tackle many of the group bosses by myself, and the other night I cleared a group dungeon (Volenfell) by myself (in normal mode). Anyway, I've really been enjoying the heck out of ESO. Getting back to the topic at hand, I saw a YouTube video from a guy who analyzed that brief little trailer for TES6 and concluded that it's going to be set in Hammerfell-- or at least that the mountains, coastline, and cities we see in the trailer are in Hammerfell.
  13. Semitones_From_A4 = 12 * LOG ( Frequency / 440, 2 )
Cents_From_Note = 100 * ( Semitones_From_A4 - ROUND ( Semitones_From_A4, 0 ) )
Edit: I'm working on posting a spreadsheet I made that simulates how the TIA generates its audio signal as shown in the schematics. I'm not going to explain it in detail, but I'll post a copy of the schematics that shows how the columns of the spreadsheet relate to the symbols on the schematics, along with some general comments about how to "use" the spreadsheet. I'll also post some other stuff related to TIA audio. This will likely be done in a series of posts, rather than all at once, and I'm going to be extremely busy with family for the next week, so it might take a little while.
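Those two spreadsheet formulas, transcribed into Python (note that Python's round() rounds ties to even while spreadsheet ROUND rounds ties away from zero; the difference only shows up exactly halfway between two semitones):

```python
import math

def semitones_from_a4(frequency):
    # 12 equal-tempered semitones per octave, relative to A4 = 440 Hz
    return 12 * math.log(frequency / 440, 2)

def cents_from_note(frequency):
    # signed distance, in cents, from the nearest equal-tempered note
    semis = semitones_from_a4(frequency)
    return 100 * (semis - round(semis))
```

For example, A5 (880 Hz) comes out as exactly 12 semitones above A4 and 0 cents off.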
  14. It's been a while since I've looked at this subject, so I hope I've gotten all of the details correct! You can use the following formula to calculate the frequency of a particular note generated by the TIA:
NoteFrequency = OscillatorRate / ColorClocks / WaveformLength / NoteValue
OscillatorRate is the frequency of the crystal oscillator, which determines the number of color clocks (a.k.a. pixel clocks) per second. This depends on the type of 2600:
NTSC 2600s: OscillatorRate = 3579575 Hz
PAL or SECAM 2600s: OscillatorRate = 3546894 Hz
ColorClocks is the number of color clocks per audio clock. The TIA generates 2 audio clocks per scan line, and there are 228 color clocks per scan line, so there are 114 color clocks per audio clock (228 / 2). This is really an average value, because one audio clock is 112 color clocks long and the other is 116 color clocks long, due to the way they're generated by the HSC (Horizontal Sync Counter). The horizontal clocks that drive the HSC are 4 color clocks long, so there are 57 horizontal clocks per scan line (228 / 4). We can't evenly divide 57 by 2-- the closest we can get to an even split is 28 and 29-- but it's just a coincidence that it worked out that well for the audio clocks. The HSC generates a number of events (or signals) related to the scan line and the playfield, and some of these are used to trigger the phases of the audio clocks. As luck would have it, two such events are separated by 28 horizontal clocks (or 112 color clocks, 28 * 4) in one direction and 29 horizontal clocks (or 116 color clocks, 29 * 4) in the other direction. In any case, we say that there are (on average) 114 color clocks per audio clock.
WaveformLength is the length of the waveform generated by the value written to the AUDC0 or AUDC1 register. (We'll ignore audio channel 1 for the rest of this discussion and focus on audio channel 0.) 
There are 16 possible values for AUDC0 (0 through 15), but some generate waveforms which are duplicates of other AUDC0 values, so there are really only 11 unique waveforms. The lengths of the waveforms are given as the number of samples in one complete cycle, which are as follows (sorted in ascending order by length):
AUDC0 = 0 or 11: waveform length = 1 ("always on")
AUDC0 = 4 or 5: waveform length = 2
AUDC0 = 12 or 13: waveform length = 6
AUDC0 = 1: waveform length = 15
AUDC0 = 6 or 10: waveform length = 31
AUDC0 = 7 or 9: waveform length = 31 (different than AUDC0 = 6)
AUDC0 = 14: waveform length = 93
AUDC0 = 15: waveform length = 93 (different than AUDC0 = 14)
AUDC0 = 2: waveform length = 465
AUDC0 = 3: waveform length = 465 (different than AUDC0 = 2)
AUDC0 = 8: waveform length = 511
NoteValue is the value written to the AUDF0 register, plus 1. There are 32 possible values (0 through 31), but we need to add 1 to them for our formula. To get a little bit technical again, the TIA uses tone clocks to control the rate at which a waveform is played. Each tone clock is generated by the AFD (Audio Frequency Divider) by suppressing some number of audio clock signals. The value of AUDF0 specifies how many sets of phase 1 and phase 2 audio clock signals to suppress. If AUDF0 = 0, no signals are suppressed, so 1 tone clock = 1 audio clock; if AUDF0 = 1, one set of signals is suppressed, so 1 tone clock = 2 audio clocks; if AUDF0 = 2, two sets are suppressed, so 1 tone clock = 3 audio clocks; etc. This has the effect of stretching the waveform to a multiple of its normal length, thereby dividing the waveform's normal or "base" frequency and producing a lower pitch. 
So if you set AUDF0 to 25 on the NTSC 2600, the various values of AUDC0 should produce the following frequencies (rounded to 5 decimal places):
AUDC0 = 0 or 11: NoteFrequency = 3579575 / 114 / 1 / 26 = 1207.68387 Hz (but the samples are "always on" so you get no sound)
AUDC0 = 4 or 5: NoteFrequency = 3579575 / 114 / 2 / 26 = 603.84194 Hz
AUDC0 = 12 or 13: NoteFrequency = 3579575 / 114 / 6 / 26 = 201.28065 Hz
AUDC0 = 1: NoteFrequency = 3579575 / 114 / 15 / 26 = 80.51226 Hz
AUDC0 = 6 or 10: NoteFrequency = 3579575 / 114 / 31 / 26 = 38.95754 Hz
AUDC0 = 7 or 9: NoteFrequency = 3579575 / 114 / 31 / 26 = 38.95754 Hz
AUDC0 = 14: NoteFrequency = 3579575 / 114 / 93 / 26 = 12.98585 Hz
AUDC0 = 15: NoteFrequency = 3579575 / 114 / 93 / 26 = 12.98585 Hz
AUDC0 = 2: NoteFrequency = 3579575 / 114 / 465 / 26 = 2.59717 Hz
AUDC0 = 3: NoteFrequency = 3579575 / 114 / 465 / 26 = 2.59717 Hz
AUDC0 = 8: NoteFrequency = 3579575 / 114 / 511 / 26 = 2.36337 Hz
To make it easier for you to compare these with the values in your second post, here they are in ascending AUDC0 order:
AUDC0 = 0: NoteFrequency = 1207.68387 Hz (but it's actually silent)
AUDC0 = 1: NoteFrequency = 80.51226 Hz
AUDC0 = 2: NoteFrequency = 2.59717 Hz
AUDC0 = 3: NoteFrequency = 2.59717 Hz
AUDC0 = 4: NoteFrequency = 603.84194 Hz
AUDC0 = 5: NoteFrequency = 603.84194 Hz
AUDC0 = 6: NoteFrequency = 38.95754 Hz
AUDC0 = 7: NoteFrequency = 38.95754 Hz
AUDC0 = 8: NoteFrequency = 2.36337 Hz
AUDC0 = 9: NoteFrequency = 38.95754 Hz
AUDC0 = 10: NoteFrequency = 38.95754 Hz
AUDC0 = 11: NoteFrequency = 1207.68387 Hz (but it's actually silent)
AUDC0 = 12: NoteFrequency = 201.28065 Hz
AUDC0 = 13: NoteFrequency = 201.28065 Hz
AUDC0 = 14: NoteFrequency = 12.98585 Hz
AUDC0 = 15: NoteFrequency = 12.98585 Hz
Some of these are sort of close to the values you got, but some are way off, which I'll explain below. 
As for the ones that are sort of close, I'm not sure I understand what you did-- did you sample the actual output of the NTSC 2600, or did you write a program to generate your own samples? In any case, 31400 Hz is just an approximation of 3579575 / 114, so any calculations that use 31400 Hz are going to be a little off. But what about the values that are way off? It turns out that the frequencies calculated with our formula don't necessarily reflect what you actually hear when the notes are played. This is due to each waveform's pattern. If the waveform has a single high phase and a single low phase, we say that it produces a "pure tone." This isn't entirely true, because the waveforms aren't sine waves and often their high and low phases aren't equal in length; but for the most part any harmonic overtones are quiet enough that these "pure tone" waveforms do sound a lot like sine waves. However, if a given waveform has more than one set of high and low phases it will produce noticeable harmonic overtones, hence what we hear will generally sound like some multiple of the calculated frequency. I'll go into this later in another post.
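The formula and waveform-length table from that post, collected into one function (the constants and table are transcribed from the post; the function name is mine). Its output reproduces the worked NTSC examples above:

```python
# waveform lengths in samples, per AUDC0 value (from the post's table)
WAVEFORM_LENGTH = {
    0: 1, 11: 1, 4: 2, 5: 2, 12: 6, 13: 6, 1: 15,
    6: 31, 10: 31, 7: 31, 9: 31, 14: 93, 15: 93,
    2: 465, 3: 465, 8: 511,
}

NTSC_OSC = 3579575   # Hz
PAL_OSC = 3546894    # Hz (also SECAM)

def tia_note_frequency(audc, audf, oscillator_rate=NTSC_OSC):
    """NoteFrequency = OscillatorRate / ColorClocks / WaveformLength / NoteValue.

    114 is the average number of color clocks per audio clock
    (228 color clocks per scan line, 2 audio clocks per line);
    NoteValue is AUDF0 + 1 because AUDF0 ranges 0..31.
    """
    return oscillator_rate / 114 / WAVEFORM_LENGTH[audc] / (audf + 1)
```

For example, tia_note_frequency(4, 25) gives the 603.84194 Hz figure from the AUDF0 = 25 table above. Note the caveat at the end of the post still applies: for waveforms with multiple high/low phases per cycle, the perceived pitch is some multiple of this calculated frequency.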
  15. I certainly agree that the terms "vertical front porch," "vertical back porch," "horizontal front porch," and "horizontal back porch" are awkward to say and write, but they aren't my inventions-- they were coined by television engineers before I was ever born and have been in use for many decades. I've never seen the terms "top vertical blank" and "bottom vertical blank" used until your post-- or for that matter "left horizontal blank" and "right horizontal blank." They might be correct in a descriptive sense, but I wouldn't call them "technically correct" because that implies they're correct according to technical definitions. Anyway, the phrase "technically correct" is often used before giving a dissenting view-- e.g., "The terms 'vertical front porch' and 'vertical back porch' are technically correct, but they're awkward to use and their meanings aren't intuitively obvious, so I prefer to use the terms 'bottom vertical blank' and 'top vertical blank,' respectively, because they're more descriptive and hence more intuitively obvious, in addition to being less awkward-sounding."
  16. "Vertical blank" or "vblank" (take your pick) isn't technically correct the way it's typically used by the Atari 2600 programming community, because technically it's the entire blanking period of the vertical cycle-- i.e., the vertical front porch (which Atarians call "overscan"), the vertical sync, and the vertical back porch (which Atarians call "vblank" or "vertical blank").
  17. At this point it's probably pointless to try to change the way the Atari 2600 programming community uses the term (also "vblank")-- it might just confuse people-- but some sort of footnote or "Did You Know?" sidebar might not be a bad idea, especially if any people who are already familiar with the terms from video technology decide to learn Atari 2600 programming. That's not as far-fetched as it might seem, because those terms are often used in connection with computer monitors, so it's not like you'd have to have been an engineer at some TV company to have ever heard of them before.
  18. That description of overscan is incorrect as far as video technology is concerned. That was always one of my pet peeves about Atari 2600 programming. If you do a web search for "overscan underscan" you can read about what overscan actually is. What Atari 2600 programmers refer to as the "overscan" is actually the "vertical front porch." (Hey, don't look at me, I didn't invent these terms. It isn't my fault that engineers feel a burning inner need to coin highly technical terms like "front porch," "back porch," and "breezeway"!)
  19. If you're going to talk about page boundaries-- which I think is a great idea-- then you should try to explain why programmers need to know about them and pay attention to them. One important point is that when an indexed instruction or branch instruction crosses a page boundary, it adds 1 machine cycle to the instruction's execution time, which can interfere with the careful timing of a scan line display or other routine. Sometimes this could be used to advantage, such as if for some reason you need the instruction to take an extra cycle during a particular iteration of a loop. However, that is probably so rare and so difficult to manage (what with revising the program and so forth) that it's probably best to forget about entertaining such thoughts. ("This way lies madness!") More often you either try to position your data tables in memory such that they can be read via an indexed instruction without ever having to cross a page boundary, or-- if for some reason you can't or don't want to do that-- be very careful to compensate for the fact that the instruction will sometimes take an extra cycle. I seem to remember dealing with page-boundary crossings in my old "bitmap display" program for the 2600, although my memory is very hazy about that. Another important fact related to page boundaries is that the 6502 processor and its kin (6507, etc.) had a famous bug in which an indirect JMP would end up jumping to the wrong address if the 2-byte pointer named in the instruction straddles a page boundary-- e.g., JMP ($10FF) is supposed to load the address that's stored at $10FF (lo byte) and $1100 (hi byte), but due to the bug it would actually read the hi byte of the JMP address from $1000 (i.e., in the same page as the lo byte of the address), with the result that the JMP jumps to the wrong place. To avoid this bug, make sure the 2-byte pointer holding the JMP target never straddles a page boundary-- i.e., never place it at an address ending in $FF. 
EDIT: I see RevEng already mentioned the extra instruction cycle; sorry for the repetition!
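The indirect-JMP bug described in that post can be demonstrated with a toy memory model (a hedged sketch, not real 6502 code; the dict stands in for the address space):

```python
def jmp_indirect(memory, pointer, buggy=True):
    """Resolve JMP (pointer) the way an NMOS 6502 does.

    The lo byte of the target comes from `pointer`; the bug is that
    the address of the hi byte wraps within the same page instead of
    carrying into the next one, so $10FF + 1 becomes $1000.
    """
    lo = memory[pointer]
    if buggy:
        hi_addr = (pointer & 0xFF00) | ((pointer + 1) & 0x00FF)
    else:
        hi_addr = pointer + 1
    return (memory[hi_addr] << 8) | lo

memory = {0x10FF: 0x34, 0x1100: 0x12, 0x1000: 0x56}
assert jmp_indirect(memory, 0x10FF) == 0x5634                 # buggy NMOS result
assert jmp_indirect(memory, 0x10FF, buggy=False) == 0x1234    # intended target
```

As long as the pointer doesn't end in $FF, the buggy and intended fetches agree, which is why keeping the pointer inside a single page avoids the problem.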
  20. The following post rambles a bit, and is based on my own understanding, interpretations, and opinions-- which might not agree with other people's-- so feel free to pick and choose as much or as little of it as you wish, or to go with any dissenting comments. In the most general sense, a "cycle" is a series of events which keep recurring in a particular sequence, with each event generally taking a particular amount of time, and with the overall cycle also generally taking a particular amount of time. Sometimes the individual events as well as the overall cycle might vary a bit in duration from one repetition to the next, so usually the average duration is given for each event as well as for the overall cycle. Thus, day and night form a cycle, the phases of the Moon form a cycle, the seasons of the year form a cycle, etc. In the computer world, a cycle is commonly understood to be the sequence of alternating high and low phases of the CPU-- i.e., a machine cycle or CPU cycle. The durations of each phase and of the overall cycle are usually given in terms of either milliseconds (thousandths of a second) or microseconds (millionths of a second), depending on how small they are. The frequency or rate of a cycle-- i.e., how many times it recurs during a specific unit of time-- is the inverse of its duration or period, and vice versa. For instance, if the cycle recurs 100 times a minute, then its frequency or rate is 100 times a minute and its duration or period is a hundredth of a minute. Similarly, if we know that the cycle has a period of one-third of a second then we know that its frequency is 3 times a second, since 3 is the inverse of 1/3. If the cycle's duration is less than a second, its frequency is usually expressed in terms of hertz (Hz), or times-per-second-- e.g., 60 Hz means 60 times a second-- although if the frequency is high enough then it might be expressed in terms of some multiple of a hertz (KHz or kilohertz, MHz or megahertz, etc.). 
The term "clock" is sometimes used as being more or less synonymous with "cycle," but they can be a bit different depending on usage, as described below. When dealing with TV displays, there are other cycles of interest-- the line rate or horizontal frequency (how many scan lines are drawn per second), the field rate or vertical frequency (how many fields are drawn per second), the frame rate (which is different than the field rate if each full frame is made up of two interlaced fields), and the color frequency (how many colored pixels are drawn per second). For TV these can vary depending on the broadcast signal (NTSC, PAL, SECAM, etc.), as well as the capabilities and settings of the monitor, which might be able to display various image formats having different horizontal and vertical frequencies, progressive vs. interlaced display, etc., not to mention differences between the old analog signals vs. newer digital signals. When programming in assembly language for Atari game consoles or computers, in which the program needs to be sure to update the graphical elements (playfield pixels, sprite pixels, color registers, etc.) in time before the raster beams* reach a certain point on the screen, we're also very interested in "color clocks," usually called "pixel clocks" when talking about TVs or monitors. For the Atari machines, "color clock" is probably a better term, since different graphical elements may have pixels of different sizes-- "pixel clock" might be potentially confusing if we're talking about playfield pixels or sprite pixels of various widths (single, double, quadruple, or even octuple for the ball and missiles), whereas "color clock" more clearly refers to how quickly the machine can draw dots of color on the screen. 
In this usage, "color clock" can mean both the amount of time it takes for the raster beams* to draw one dot of color, as well as the distance that the raster beams* travel during that time-- i.e., we think of the active portion of a scan line as being divided into a certain number of color clocks, such that we can express a given position on the screen in terms of a scan line number and a color clock number, or refer to a graphical element as being a certain number of color clocks in width. (*A monochrome or black-and-white monitor has only one raster beam, but a color CRT actually has three raster beams-- one each for the red, green, and blue phosphors-- although it's common to think of them as forming "one" raster beam since they move in sync with each other.) It's also common to see references to a "machine clock," which is the same thing as a machine cycle. I like to use "cycle" as a shortened way of saying "machine cycle," and "clock" as a shortened way of saying "color clock." The correspondence between machine cycles and color clocks varies from machine to machine-- e.g., on the Atari 2600 each machine cycle corresponds to 3 color clocks, whereas on the Atari 8-bit computers each machine cycle corresponds to 2 color clocks due to their faster CPU speed.
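The clock relationships described in that post reduce to some concrete NTSC 2600 arithmetic (a sketch using the 3579575 Hz oscillator rate and the 228-color-clocks-per-line figure cited earlier in these posts):

```python
NTSC_COLOR_CLOCK_HZ = 3579575          # oscillator rate = color clock rate
COLOR_CLOCKS_PER_MACHINE_CYCLE = 3     # on the 2600, each machine cycle = 3 color clocks
COLOR_CLOCKS_PER_SCAN_LINE = 228

# CPU speed follows from the 3:1 ratio of color clocks to machine cycles
cpu_hz = NTSC_COLOR_CLOCK_HZ / COLOR_CLOCKS_PER_MACHINE_CYCLE          # ~1.19 MHz
machine_cycles_per_line = COLOR_CLOCKS_PER_SCAN_LINE // COLOR_CLOCKS_PER_MACHINE_CYCLE  # 76

# period is the inverse of frequency, as in the post's 1/3-second <-> 3 Hz example
color_clock_period_us = 1e6 / NTSC_COLOR_CLOCK_HZ                      # microseconds
```

On the Atari 8-bit computers the ratio would be 2 color clocks per machine cycle instead of 3, per the post's comparison.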
  21. This looks and sounds very interesting! Is there an option to specify whether you're targeting an NTSC console or a PAL/SECAM console? EDIT: Sorry if that was a dumb question, I didn't see the link until after I posted (too fast on the draw). I'll definitely be checking it out.
  22. I haven't done anything with bB in a couple of years-- my new computer doesn't even have it installed, and my old computer isn't handy right now-- so all of that from six years ago is kind of fuzzy in my mind! But I think that what I wrote probably applied to a standard or non-bankswitched kernel. Since I don't have access to a bB installation right now, I can't check whether the numbers are the same if you're using bankswitching. Anyway, if I remember correctly, when you're using bankswitching you generally need to keep your display-related data in the same bank as the drawscreen routine, because when the 2600 is performing the display kernel (drawing the screen) it needs to be able to read the data from ROM without switching banks as it's drawing. The exception is when the data is loaded into RAM, as is normally the case with the playfield graphics. However, that doesn't include the playfield colors, since the data resides in ROM and only the pointers are in RAM. Now, even if you've got the data in the wrong bank, that shouldn't make the code error out during compile (unless the compiler is "smart" about such things, which I don't think it was back when I was still using bB)-- I think the worst that should happen would be that the playfield colors aren't what you expected, since they're being read from the "correct" address but the bank they're in isn't selected, hence the kernel will be reading "garbage" data from where it thinks the playfield color table is. You should post the messages that you're getting, so we can read the full text. You don't need to post your code if you don't want to release it, but being able to see a screenshot of the exact message should help us.
  23. Does anyone else have trouble with the Delete key not working while typing a reply to a post? The only way I can delete anything to correct typos etc. is to use the Backspace key. I post in a number of other forums but don't have this problem in any of them. Is it just me, or is anyone else having this problem, too?
  24. You also need a black color, rather than just the blanking, because you can't turn blanking on and off fast enough to draw pixels with it. It takes a minimum of 3 CPU cycles to turn blanking on or off-- it could take longer if you need to load the A, X, or Y register first-- and each CPU cycle is 3 color clocks wide, so the smallest "pixel" you can draw this way (by turning blanking on, then immediately turning it back off) is 9 color clocks wide. There's also a limitation on where you can position these "pixels," because they can start only on every third color clock, since shifting the machine code by 1 CPU cycle corresponds to shifting the "blanking pixel" by 3 color clocks. Additionally, if you "draw" a background with the blanking this way, you can't move anything across it-- e.g., a spaceship flying through space-- because the raster beams can't draw the spaceship if they're turned off. And finally, if you "draw" anything with the blanking, you have to figure that into the timing and sequencing of everything that's going on in your scan line loop, because you can't just set some graphics register to a desired shape and color during the horizontal blanking and let the TIA automatically draw from that graphics register at the appropriate place in the scan line. So you can definitely "draw" with the blanking if you want to, but there are severe limitations.
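The width arithmetic from that post, as a quick sketch of why blanking-drawn "pixels" are so coarse:

```python
MIN_CPU_CYCLES_TO_TOGGLE = 3   # fastest write that turns blanking on or off
COLOR_CLOCKS_PER_CPU_CYCLE = 3

# turning blanking on and then immediately back off:
min_blanking_pixel_width = MIN_CPU_CYCLES_TO_TOGGLE * COLOR_CLOCKS_PER_CPU_CYCLE  # 9 color clocks

# and such "pixels" can only start on every third color clock, because the
# code can only be shifted in whole CPU cycles
possible_start_offsets = [cycle * COLOR_CLOCKS_PER_CPU_CYCLE for cycle in range(5)]
```

So the finest granularity is a 9-clock-wide blob positioned in 3-clock steps, which matches the post's numbers.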
  25. Yes, but what about ordinary TVs from the '60s and early '70s? If earlier TVs were more susceptible to screen burn and image persistence, and this became apparent once a large number of households started connecting the earliest video game units to their TV sets, then we would hope that by the mid-to-late '70s and the '80s that TV manufacturers would have taken steps to minimize the potential for that problem to occur. And it might be possible that some of the people who apparently had issues with Pong were still using black-and-white TVs. I don't remember what year we finally got a color TV, but I do remember that I'd been whining for one for a long time. I remember that a lot of shows were filmed and broadcast in black-and-white during my childhood, and color broadcasts weren't the norm until the second half of the '60s-- e.g., the first season of Lost in Space (1965-1966) was in black-and-white, and that wasn't too unusual at the time.