SeaGtGruff

Members
  • Posts: 5,591
  • Joined
  • Last visited
  • Days Won: 2

SeaGtGruff last won the day on October 8, 2013
SeaGtGruff had the most liked content!

3 Followers

About SeaGtGruff
  • Birthday: 01/02/1959

Profile Information
  • Gender: Male
  • Location: South Carolina, USA

Recent Profile Visitors
30,394 profile views

SeaGtGruff's Achievements
  • Quadrunner (9/9)
  • Reputation: 605

  1. Yes, it would keep moving the sprite by the same amount unless you set the movement register to a different value. You can keep moving the sprite that way forever, as long as you keep strobing HMOVE. So if you want to move the sprite, say, 70 pixels to the right, you can strobe HMOVE for 10 lines at 7 pixels per line, then stop.
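Here's a minimal DASM-style sketch of that idea, assuming the usual vcs.h register names and that I'm remembering the HMxx sign convention correctly (a negative value in the high nibble moves the object to the right). It's just an illustration of the technique, not drop-in kernel code:

        lda #$90        ; -7 in the high nibble: shift player 0 seven pixels right on each HMOVE
        sta HMP0
        ldx #10         ; apply it on 10 consecutive scan lines (10 * 7 = 70 pixels)
ShiftLoop
        sta WSYNC       ; wait for the start of the next scan line
        sta HMOVE       ; strobe HMOVE right after WSYNC so the shift is applied
        dex
        bne ShiftLoop
        lda #0
        sta HMP0        ; clear the motion register so any later HMOVE leaves the sprite alone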
  2. Not sure why no one ever answered, as it's well known. This was done by strobing the HMOVE register on every scan line. Normally, when you strobe HMOVE to shift the players, missiles, and ball to the left or right by whatever values you've indicated for each of them, you get a black line 8 pixels long at the beginning of that scan line. In games where HMOVE was strobed only on specific lines because of the way the game screen was divided up into horizontal bands or zones, you can see a "comb" of these black lines along the left edge of the screen. What Activision decided to do was strobe HMOVE on every scan line so this "comb" effect wasn't apparent, since those black lines are on every scan line instead of on only some of the scan lines.
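In other words, the kernel strobes HMOVE unconditionally at the top of every line instead of only on the lines where something moves. A rough sketch of the idea (not Activision's actual code; Y is assumed to hold the remaining line count):

KernelLine
        sta WSYNC       ; start of a new scan line
        sta HMOVE       ; strobe HMOVE every line, so the 8-pixel black stub appears on every line
        ; ... update the playfield and sprite registers for this line ...
        dey
        bne KernelLine  ; the HMxx registers simply stay at 0 on lines where nothing should move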
  3. Actually, the asterisks are not where the signal comes back, but where it comes out of the whatsit-- that is, using my fictitious cell labels from above, the asterisk is not where G2 is coming back from P1, but rather is where the signal comes out of H2. But otherwise it's as I described-- the asterisks always occur at a place where the signal coming out of a circuit can have an unpredictable state upon the initial powerup, given that one or more of the signals that feed into that circuit, or which are used to latch some signal going into the circuit (if I said that right), are actually coming back from further down the overall signal path.
  4. I know this is a very late reply, but I saw this thread and wanted to add a little to it, since I spent some time several years ago trying to understand the TIA schematics and constructing an Excel spreadsheet to simulate the circuitry. Those asterisks are-- I believe-- the places where you have a signal feeding backward (or looping back) into a circuit, such that the state of the whatsit will be indeterminate or random when you first start feeding power into the TIA. That might be incorrect, but the reason I believe that's what it means is that you see those asterisks throughout the schematics, and they always occur at places where a signal from further down the line is being brought back to form a loop.

When I was making my spreadsheet, each row represented a moment in time, and each column represented a transistor or gate or other thingy, beginning with the oscillator signal coming into the TIA, such that the value of cell A1 basically determined the values of the other cells on that row according to the logic associated with each column-- that is, the signal flow was A1, B1, C1, D1, E1, etc. My plan was to have each row start with a different or alternating oscillator value-- that is, A1 might be 0, then A2 would be 1, then A3 would be 0 again, etc. I actually decided to randomize the value of A1 so it could be either 0 or 1, to represent the uncertainty of what state A1 would be in at the moment the TIA powered up and started receiving the oscillator's signal.

Most of the flow is in one direction, so it was easy to construct the logic for each cell. But as soon as I reached a point where a signal is being looped back, I was faced with a quandary: how could I construct the logic for, say, cell H1 if one of its inputs is a line coming back from, say, cell P1, given that I hadn't even coded the logic for cell P1 yet and determined its value? What I ended up doing was to add a cell in front of H1 to represent the value of P1 from further along the signal flow, but taken from the row above, since its state had to have been determined during a prior moment in time-- that is, G2 was equal to P1, then H2 would be determined by whatever logic represented that particular circuit, and eventually I could determine what P2 was equal to and feed its value back into G3.

The problem is, what to do about the very first row or moment in time? I decided to code the logic for the G column such that if we were on row 1 we would randomly set G1 to a 0 or 1, given that we had no idea yet what its actual state should be, but if we were on any row other than row 1, then the value of G2 would be pulled from cell P1. As a result, I ended up with two rows for each state of the oscillator-- that is, rows 1 and 2 might begin with a value of 1, then rows 3 and 4 would be where the oscillator changed to a value of 0, etc. Thus, the odd-numbered rows represented the initial signal flow when the oscillator changes from 0 to 1 or vice versa, and the even-numbered rows represented the signal flow after any looped-back signals had changed their states and everything had settled down.

When I did that, I noticed that everywhere I was randomizing the initial state of a signal because it was being looped back from further along the path and its value wasn't known yet, that was where those asterisks appeared in the schematics.
I might be wrong about what they mean, but I genuinely believe they were put there to call attention to the places where random high/low states occur at the initial moment of powering up the TIA and the oscillator.
  5. I understand what you're saying, but I still think that a computer program-- and an assembler is a computer program-- can be programmed to be "intelligent" enough to recognize that "$0013" is a 4-"digit" hex address, and as such it should be able to interpret it as a 2-byte address even though the high byte is 0. Now, if the person using the assembler prefers it to default to one behavior or another-- such as automatically assuming that "$0013" is to be treated as a 2-byte address, or automatically simplifying it to a 1-byte zero-page address-- then a command line switch or .INI file parameter or whatever could be added for that purpose. Having to append extra characters to an opcode seems like a kludge. I'm familiar with that method, even if I haven't written or assembled any 6502 code in several years now, and I understand that it can be useful for forcing the assembler to use a particular address mode-- but in this particular case it seems like it shouldn't be necessary in the first place, because you aren't using it to force the assembler to use a different address mode than otherwise written, you're using it to force the assembler to use the address mode as written.
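For reference, this is the kind of thing being described, in DASM-style syntax (I believe a .w suffix is the "extra characters" that force the absolute form, though the exact spelling varies by assembler):

        sta $13         ; zero-page store: assembles to $85 $13 (3 cycles)
        sta.w $13       ; forced absolute:  assembles to $8D $13 $00 (4 cycles)
        sta $0013       ; the case under discussion: many assemblers still optimize this down to zero-page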
  6. My personal feeling is that that's a problem with the assembler, because to my mind it should treat a 4-character hex address as absolute, even if the high byte is $00.
  7. I didn't bother to look up the correct address for RESM1, so I might have given the wrong one. But what I meant was that if the zero-page address were $nn, then the absolute address would be written in assembly as $00nn, not $nn00-- even though it would be expressed in Little Endian order in the actual machine code.
  8. I remember that I made and posted a chart showing when you could write to the PF registers, breaking it down to each playfield pixel (based on some calculations), and I think you (@Thomas Jentzsch) gave me feedback about timing variations for certain models/clones, which I added to the chart. I think I posted two versions, one showing the calculations and another that just gave the results. It should be here in one of the 2600 programming subforums. But I'm a little off-topic, since this thread is actually about HMOVE.
  9. As far as I remember, there are indeed some variations between specific models and clones of the VCS 2600 with regard to things like the timing of HMOVEs and the windows for when it's safe to write to the playfield registers. You might be able to find more information about the differences by searching the 2600 programming subforums for something like "2600jr HMOVE," "2600jr starfield effect," "2600jr playfield pixels," etc.
  10. If that's supposed to be the absolute version of a zero-page address, shouldn't it be $0013? Even though 6502 assembly is Little Endian, that doesn't change the way you write an absolute address.
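To put bytes to that (RESM1 is indeed $13 in the standard TIA register map): the address is written high-byte-first in the source, but the assembler stores the operand low-byte-first in the machine code.

        sta $0013       ; written as $0013 (absolute RESM1) in the source...
                        ; ...and assembled as $8D $13 $00 -- opcode, low byte, then high byte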
  11. … he said while posting on a forum devoted to old gaming consoles. Yes, I admit it-- I'm late to the party. And if ESO offers "basically the exact same experience" as "Pretty much every MMO," it's still new to me, since I've never played any of them. I also never watched Seinfeld or Friends until long after they ended and were living on in Syndication Perpetuity. But getting back to ESO, it seems pretty absorbing and fun to me. I'm more into playing it solo than trying to make a bunch of friends to chat with, or group up with some random strangers to tackle a dungeon or boss that's deliberately designed to be too much for a single player to handle. On the other hand, I've occasionally helped other players out by crafting them some gear, or by teaming up to tackle a boss for a daily quest. It's just that I'd rather not have to pay attention to the chat window when I'd rather be exploring some location I haven't been to before, or seeing if I've improved enough to successfully beat a group dungeon or group boss on my own. And there are definitely some things that I don't care for, such as the obsession that all of the other players seem to have with leveling up as quickly as possible, or with creating the "best build," or charging other players outrageous prices for items which the game designers have deliberately programmed to be super-rare, etc. So it isn't all peaches and cream. But all in all, I'm having a lot of fun with it.
  12. In a way, TES games are designed such that you can't "finish" them per se, given that after you complete the main quest and all side/guild/other quests, you can continue to (re)explore dungeons and caves, or even pursue a life of sorts within the game's environment, such as collecting alchemy ingredients to sell to merchants, or crafting potions or other things to sell to merchants, or just wandering around town chatting with the locals even though they always talk about the same things.

I kept telling myself for years that I'd never buy ESO, because I had a very dim view of online games-- not based on any personal experience, since I'd never played an online game before, ever, but rather based on things I'd read about online gaming from people who played online games. But about 10 months ago I finally bought ESO: Morrowind and all (or most?) of the DLC available at that time, because it was on sale and I was trying to complete my collection of ES games. (I also bought the Skyrim Special Edition at the same time so I'd have all available official add-ons for it, too.) And I've basically been playing ESO ever since, to the exclusion of just about everything else.

I primarily play ESO as though it were an offline game, meaning I try to solo as much of it as I can. But I also interact with other players from time to time, such as to craft them a weapon or set of armor, or when I need to join a group to go after a group boss or clear a group dungeon-- except that I've gotten to where I can tackle many of the group bosses by myself, and the other night I cleared a group dungeon (Volenfell) by myself (in normal mode). Anyway, I've really been enjoying the heck out of ESO.

Getting back to the topic at hand, I saw a YouTube video from a guy who analyzed that brief little trailer for TES6 and concluded that it's going to be set in Hammerfell-- or at least that the mountains, coastline, and cities we see in the trailer are in Hammerfell.
  13. Semitones_From_A4 = 12 * LOG( Frequency / 440, 2 )
Cents_From_Note = 100 * ( Semitones_From_A4 - ROUND( Semitones_From_A4, 0 ) )

Edit: I'm working on posting a spreadsheet I made that simulates how the TIA generates its audio signal as shown in the schematics. I'm not going to explain it in detail, but I'll post a copy of the schematics that shows how the columns of the spreadsheet relate to the symbols on the schematics, along with some general comments about how to "use" the spreadsheet. I'll also post some other stuff related to TIA audio. This will likely be done in a series of posts, rather than all at once, and I'm going to be extremely busy with family for the next week, so it might take a little while.
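As a quick worked example of those two formulas, take the 603.84194 Hz tone you get from AUDC0 = 4 with AUDF0 = 25 on an NTSC machine (see the next post):

\[
12 \log_2\!\left(\frac{603.84194}{440}\right) \approx 5.48,
\qquad
100 \times \bigl(5.48 - \mathrm{round}(5.48)\bigr) = 100 \times (5.48 - 5) \approx +48 \text{ cents},
\]

so that tone sits about 5.48 semitones above A4-- i.e., nearest to D5 (about 587.33 Hz) and roughly 48 cents sharp.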
  14. It's been a while since I've looked at this subject, so I hope I've gotten all of the details correct! You can use the following formula to calculate the frequency of a particular note generated by the TIA:

NoteFrequency = OscillatorRate / ColorClocks / WaveformLength / NoteValue

OscillatorRate is the frequency of the crystal oscillator, which determines the number of color clocks (a.k.a. pixel clocks) per second. This depends on the type of 2600:
  • NTSC 2600s: OscillatorRate = 3579575 Hz
  • PAL or SECAM 2600s: OscillatorRate = 3546894 Hz

ColorClocks is the number of color clocks per audio clock. The TIA generates 2 audio clocks per scan line, and there are 228 color clocks per scan line, so there are 114 color clocks per audio clock (228 / 2). This is really an average value, because one audio clock is 112 color clocks long and the other is 116 color clocks long, due to the way they're generated by the HSC (Horizontal Sync Counter). The horizontal clocks that drive the HSC are 4 color clocks long, so there are 57 horizontal clocks per scan line (228 / 4). We can't evenly divide 57 by 2-- the closest we can get to an even split is 28 and 29-- but it's just a coincidence that it worked out that well for the audio clocks. The HSC generates a number of events (or signals) related to the scan line and the playfield, and some of these are used to trigger the phases of the audio clocks. As luck would have it, two such events are separated by 28 horizontal clocks (or 112 color clocks, 28 * 4) in one direction and 29 horizontal clocks (or 116 color clocks, 29 * 4) in the other direction. In any case, we say that there are (on average) 114 color clocks per audio clock.

WaveformLength is the length of the waveform generated by the value written to the AUDC0 or AUDC1 register. (We'll ignore audio channel 1 for the rest of this discussion and focus on audio channel 0.) There are 16 possible values for AUDC0 (0 through 15), but some generate waveforms which are duplicates of other AUDC0 values, so there are really only 11 unique waveforms. The lengths of the waveforms are given as the number of samples in one complete cycle, which are as follows (sorted in ascending order by length):
  • AUDC0 = 0 or 11: waveform length = 1 ("always on")
  • AUDC0 = 4 or 5: waveform length = 2
  • AUDC0 = 12 or 13: waveform length = 6
  • AUDC0 = 1: waveform length = 15
  • AUDC0 = 6 or 10: waveform length = 31
  • AUDC0 = 7 or 9: waveform length = 31 (different than AUDC0 = 6)
  • AUDC0 = 14: waveform length = 93
  • AUDC0 = 15: waveform length = 93 (different than AUDC0 = 14)
  • AUDC0 = 2: waveform length = 465
  • AUDC0 = 3: waveform length = 465 (different than AUDC0 = 2)
  • AUDC0 = 8: waveform length = 511

NoteValue is the value written to the AUDF0 register, plus 1. There are 32 possible values (0 through 31), but we need to add 1 to them for our formula. To get a little bit technical again, the TIA uses tone clocks to control the rate at which a waveform is played. Each tone clock is generated by the AFD (Audio Frequency Divider) by suppressing some number of audio clock signals. The value of AUDF0 specifies how many sets of phase 1 and phase 2 audio clock signals to suppress. If AUDF0 = 0, no signals are suppressed, so 1 tone clock = 1 audio clock; if AUDF0 = 1, one set of signals is suppressed, so 1 tone clock = 2 audio clocks; if AUDF0 = 2, two sets are suppressed, so 1 tone clock = 3 audio clocks; etc.

This has the effect of stretching the waveform to a multiple of its normal length, thereby dividing the waveform's normal or "base" frequency and producing a lower pitch. So if you set AUDF0 to 25 on the NTSC 2600, the various values of AUDC0 should produce the following frequencies (rounded to 5 decimal places):
  • AUDC0 = 0 or 11: NoteFrequency = 3579575 / 114 / 1 / 26 = 1207.68387 Hz (but the samples are "always on" so you get no sound)
  • AUDC0 = 4 or 5: NoteFrequency = 3579575 / 114 / 2 / 26 = 603.84194 Hz
  • AUDC0 = 12 or 13: NoteFrequency = 3579575 / 114 / 6 / 26 = 201.28065 Hz
  • AUDC0 = 1: NoteFrequency = 3579575 / 114 / 15 / 26 = 80.51226 Hz
  • AUDC0 = 6 or 10: NoteFrequency = 3579575 / 114 / 31 / 26 = 38.95754 Hz
  • AUDC0 = 7 or 9: NoteFrequency = 3579575 / 114 / 31 / 26 = 38.95754 Hz
  • AUDC0 = 14: NoteFrequency = 3579575 / 114 / 93 / 26 = 12.98585 Hz
  • AUDC0 = 15: NoteFrequency = 3579575 / 114 / 93 / 26 = 12.98585 Hz
  • AUDC0 = 2: NoteFrequency = 3579575 / 114 / 465 / 26 = 2.59717 Hz
  • AUDC0 = 3: NoteFrequency = 3579575 / 114 / 465 / 26 = 2.59717 Hz
  • AUDC0 = 8: NoteFrequency = 3579575 / 114 / 511 / 26 = 2.36337 Hz

To make it easier for you to compare these with the values in your second post, here they are in ascending AUDC0 order:
  • AUDC0 = 0: NoteFrequency = 1207.68387 Hz (but it's actually silent)
  • AUDC0 = 1: NoteFrequency = 80.51226 Hz
  • AUDC0 = 2: NoteFrequency = 2.59717 Hz
  • AUDC0 = 3: NoteFrequency = 2.59717 Hz
  • AUDC0 = 4: NoteFrequency = 603.84194 Hz
  • AUDC0 = 5: NoteFrequency = 603.84194 Hz
  • AUDC0 = 6: NoteFrequency = 38.95754 Hz
  • AUDC0 = 7: NoteFrequency = 38.95754 Hz
  • AUDC0 = 8: NoteFrequency = 2.36337 Hz
  • AUDC0 = 9: NoteFrequency = 38.95754 Hz
  • AUDC0 = 10: NoteFrequency = 38.95754 Hz
  • AUDC0 = 11: NoteFrequency = 1207.68387 Hz (but it's actually silent)
  • AUDC0 = 12: NoteFrequency = 201.28065 Hz
  • AUDC0 = 13: NoteFrequency = 201.28065 Hz
  • AUDC0 = 14: NoteFrequency = 12.98585 Hz
  • AUDC0 = 15: NoteFrequency = 12.98585 Hz

Some of these are sort of close to the values you got, but some are way off, which I'll explain below. As for the ones that are sort of close, I'm not sure I understand what you did-- did you sample the actual output of the NTSC 2600, or did you write a program to generate your own samples? In any case, 31400 Hz is just an approximation of 3579575 / 114, so any calculations that use 31400 Hz are going to be a little off.

But what about the values that are way off? It turns out that the frequencies calculated with our formula don't necessarily reflect what you actually hear when the notes are played. This is due to each waveform's pattern. If the waveform has a single high phase and a single low phase, we say that it produces a "pure tone." This isn't entirely true, because the waveforms aren't sine waves and often their high and low phases aren't equal in length; but for the most part any harmonic overtones are quiet enough that these "pure tone" waveforms do sound a lot like sine waves. However, if a given waveform has more than one set of high and low phases, it will produce noticeable harmonic overtones, hence what we hear will generally sound like some multiple of the calculated frequency. I'll go into this later in another post.
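If you want to actually hear that AUDF0 = 25 example, the register setup is just this (a minimal sketch using the usual vcs.h names; AUDV0 sets the volume, which doesn't affect the pitch):

        lda #25
        sta AUDF0       ; tone clock = audio clock / (25 + 1) = divide by 26
        lda #1
        sta AUDC0       ; 4-bit poly waveform, length 15 -> about 80.5 Hz on an NTSC 2600
        lda #8
        sta AUDV0       ; volume 0-15; anything nonzero will be audible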
  15. I certainly agree that the terms "vertical front porch," "vertical back porch," "horizontal front porch," and "horizontal back porch" are awkward to say and write, but they aren't my inventions-- they were coined by television engineers before I was ever born and have been in use for many decades. I've never seen the terms "top vertical blank" and "bottom vertical blank" used until your post-- or for that matter "left horizontal blank" and "right horizontal blank." They might be correct in a descriptive sense, but I wouldn't call them "technically correct" because that implies they're correct according to technical definitions. Anyway, the phrase "technically correct" is often used before giving a dissenting view-- e.g., "The terms 'vertical front porch' and 'vertical back porch' are technically correct, but they're awkward to use and their meanings aren't intuitively obvious, so I prefer to use the terms 'bottom vertical blank' and 'top vertical blank,' respectively, because they're more descriptive and hence more intuitively obvious, in addition to being less awkward-sounding."