Posts posted by Bryan

  1. Interesting about the calibration of colour strength in the burst. So the burst strength can be weakened to boost colour.

     

    Do you know how sets calibrate brightness?

    Changing the burst level only works if you change it independently of the rest of the color. Atari used this trick in the 6-switch 2600 and then in the 1200XL. TIA/GTIA only puts out one level of color, but on those models the burst was made to be a lower level than at all other times so the colors would be more saturated. I don't know if any TVs adjust brightness, but some of them might set the black level after sync. That's something I need to investigate further.
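
    Here's a rough back-of-the-envelope sketch (Python, with purely illustrative numbers, not measurements from real hardware) of why a weak burst reads as stronger color: the set normalizes its chroma gain against the burst it receives, so decoded saturation scales with the ratio of chroma amplitude to burst amplitude.

    # Illustrative model only -- amplitudes are assumptions, not measured values.
    REFERENCE_BURST  = 40.0   # nominal NTSC burst amplitude (IRE)
    CHROMA_AMPLITUDE = 40.0   # the machine's single chroma output level (IRE)

    for burst in (40.0, 30.0, 20.0):
        tv_gain = REFERENCE_BURST / burst                    # set turns color up for a weak burst
        saturation = tv_gain * CHROMA_AMPLITUDE / REFERENCE_BURST
        print(f"burst {burst:.0f} IRE -> apparent saturation x{saturation:.2f}")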

  2. My current theory, based on this and the greater color fringing on the text, is that this isn't really an issue with the phase of the artifacted colors. Rather, I suspect that the 800 is putting out a weaker color signal that is causing the artifacting colors to be very oversaturated compared to the XL/XE models, which would cause hue shifts due to output limitations in the display and differences in the way the decoded color values are clamped. I can get the 800XL to show a somewhat similar effect on the TV if I crank color all the way to max and then tune the tint to match the 800's phase offset.

     

    It's very true that the amplitude of the burst will determine how much artifacting you see on high-rez text. Early sets only calibrated color phase off the burst, but modern sets (starting in the late '70s probably) calibrate color strength as well. A weak burst means the TV will assume all color is weak and it will bump up the "COLOR" control leading to stronger artifacts. This is one of the tough things to set when designing something like the UAV. If the color is strong, then the composite picture gets clearer but faint jailbars can start to appear on some TVs that don't filter Chroma out of the picture very well.
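
    For what it's worth, here's a toy Python sketch of the comb-filter trick the better sets use to keep chroma (and jailbars) out of the luma. It relies on the NTSC subcarrier being 227.5 cycles per line, so chroma flips phase on the next scan line while the picture detail repeats. The sample values are synthetic.

    import numpy as np

    n = np.arange(512)
    luma   = 0.4 + 0.1 * (n > 256)              # same picture detail on two adjacent lines
    chroma = 0.2 * np.cos(2 * np.pi * n / 4)    # subcarrier sampled at 4x

    line_a = luma + chroma                      # this scan line
    line_b = luma - chroma                      # next line: chroma phase is inverted

    y = (line_a + line_b) / 2                   # comb output: chroma cancels, no jailbars
    c = (line_a - line_b) / 2                   # comb output: luma cancels
    print(np.allclose(y, luma), np.allclose(c, chroma))   # True True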

    • Like 2
  3. To make things 100% clear, here's a demonstration of how it occurs.

     

    I ran the following program:

    10 GRAPHICS 8+16:COLOR 1:REM HIGH REZ
    20 POKE 712,148:REM BLUE BORDER
    30 POKE 710,4:REM GRAY SCREEN = NO COLOR
    40 FOR X=1 TO 319 STEP 2:REM SKIP EVERY OTHER X
    50 PLOT X,0:DRAWTO X,191:REM VERTICAL LINES
    60 NEXT X
    70 GOTO 70:REM HALT WITH SCREEN INTACT

     

    You can see the output in S-Video and in Composite. In the composite picture, instead of lines we now see a solid red screen.

     

    Now look at the scope images of a single line from the middle of the screen. First is the Luminance channel. Once we get to the playfield area, we see the alternating lighter and darker pixels that make up the lines. I also took a zoomed-in image to show the pattern more clearly.

     

    Next is the Chrominance channel. We see that there is the off-screen colorburst, followed by the widely spaced color information for the blue borders. The playfield is gray, and thus has no color signal.

     

    Now look at the 2 Composite (mixed) channel pictures. The Luminance and Chrominance have been combined and now there's a waveform across the entire width of the screen. Because the Atari generates pixels with the same clock as it generates color, the frequency is the same and it is now impossible to separate what came from Luminance and what came from Chrominance. This signal looks just like a colored playfield signal.
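
    To put a rough number on it, here's a small Python sketch (illustrative values only, not taken from the scope captures) showing that a luma-only square wave at the subcarrier rate demodulates as real chroma once it's in the composite signal:

    import numpy as np

    FSC = 3.579545e6                      # NTSC color subcarrier
    FS  = 8 * FSC                         # 8 samples per subcarrier cycle
    t   = np.arange(8 * 100) / FS         # 100 whole cycles of "playfield"

    # 320-mode pattern: every other pixel lit, so luma toggles once per subcarrier cycle
    luma = np.where((t * FSC) % 1.0 < 0.5, 0.6, 0.3)

    # A composite decoder demodulates any 3.58 MHz energy against its regenerated burst
    i = 2 * np.mean(luma * np.cos(2 * np.pi * FSC * t))
    q = 2 * np.mean(luma * np.sin(2 * np.pi * FSC * t))
    print(np.hypot(i, q))                 # clearly nonzero, so the set paints a color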

     

    The C64 uses about a 12% faster pixel clock, so vertical line patterns won't trick the TV like this (also, the faster clock makes its playfield a little narrower).

    [Attachments: photos of the S-Video and composite screens, plus oscilloscope captures of the Luminance, Chrominance, and Composite channels]

    • Like 7
  4. I'm confused why modern high-def flatscreens show artifacting colors at all? Do they have some kind of CRT-emulation built in or something?

     

    Because anything that decodes NTSC video from a Composite source will do it. The signal contains frequencies that must be decoded as color.

     

     

    ok but then why did the higher-res composite monitors BITD show the actual bitmap patterns and not the artifact color?

     

    I guess I don't understand it as much as I thought. It was always explained to me that artifacting was a result of limited horizontal resolution of the screen.

     

    They only failed to display color if they were using separate Chroma/Luma mode. Hopefully my post above helps.

  5. Okay, I'll go back and read this after I post the lowdown on artifacting.

     

    1. It has nothing to do with the display output device (the CRT or the LCD panel). The artifact colors are in the signal before it reaches the tube. The alignment of the image to the phosphors is completely arbitrary (and will actually vary quite a bit as the TV warms up). The size of the phosphors relative to the resolution of the video source is different from set to set. Phosphor triads are NOT related to pixels. The shadow mask blocks the guns from hitting anything except their designated phosphors, but this is done after the continuous analog image has left the guns, giving us sort of a Venetian blind view of each of the 3 individual signals. If R, G, and B were run to 3 discrete B&W tubes, you'd still see the artifact colors happening.**

     

    2. Artifacting is generated in the video decoding circuits. The NTSC color carrier is centered around 3.58MHz. The A8's pixel clock is based on this same frequency, so drawing on-off-on pixels in 320 mode actually puts a 3.58MHz waveform in the picture. The TV sees this and screams, "COLOR!!!1!omg". Actually, the TV only screams 2 colors: one if the pattern is 1010 and another if it's 0101. The colors produced depend on the phase relationship between the on-off waveform and the colorburst wave extrapolated out to the same point in time (which the TV does via a PLL). Artifacting is putting a color-generating waveform into the picture using luma signals instead of the automatically generated Chroma.

     

    3. S-Video (separated video) won't produce the colors because the monitor knows that color is only contained in the Chroma signal. Those 3.58MHz waves in the Luma don't fool it. With a Composite signal, there's no way for the monitor to know the difference. A 3.58MHz wave in the image = color, no matter what.

     

    4. Different A8's have different video buffer circuits and these introduce differing amounts of skew: the delay relationship between Chroma and Luma. This skew changes the colors the Luma appears to be generating because it skews the timing relationship to the colorburst which is our reference for color.

     

    5. I suspect that the 800's strange colors are the result of a sub-optimal clock circuit causing an odd duty cycle in high resolution. High resolution is created by showing 2 pixels per 3.58MHz clock cycle. One LUM value is shown when the clock is high, and another when it's low. If the clock isn't a true 50/50 waveform, then even and odd pixels will have different widths, and will create artifact patterns that aren't ideal. This probably explains the odd color shifting and why some monitors may interpret them differently. There's a rough numeric sketch of the phase and duty-cycle effect after this list. EDIT: I also noticed a reduced adjustment range with the UAV pot on the 800 which points to a clock issue. These are things on my laundry list for Rev E.

     

     

    ** not as color, of course, but in the relative strengths of the images.
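
    As a rough illustration of points 2 and 5 (Python, with made-up values; nothing here is measured from real hardware), the demodulated hue depends only on where the luma pattern sits relative to the burst, and a skewed clock duty cycle nudges that phase:

    import numpy as np

    FSC = 3.579545e6                  # NTSC color subcarrier
    FS  = 16 * FSC                    # oversampled for illustration
    t   = np.arange(16 * 64) / FS     # 64 whole subcarrier cycles

    def artifact_hue(offset, duty=0.5):
        # 320-mode pixels toggle at FSC; offset=1 shifts the pattern half a cycle
        # (i.e. "1010" vs "0101"); duty models a clock that isn't a true 50/50 wave.
        phase = (t * FSC + offset * 0.5) % 1.0
        luma = np.where(phase < duty, 1.0, 0.0)
        i = 2 * np.mean(luma * np.cos(2 * np.pi * FSC * t))
        q = 2 * np.mean(luma * np.sin(2 * np.pi * FSC * t))
        return np.degrees(np.arctan2(q, i)) % 360   # decoded hue angle vs. the burst

    print(artifact_hue(0))            # "1010" pattern -> one hue
    print(artifact_hue(1))            # "0101" pattern -> roughly 180 degrees away
    print(artifact_hue(0, duty=0.4))  # skewed duty cycle -> the hue shifts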

    • Like 6
  6. Unfortunately, this problem certainly didn't stop the Jaguar. What good is it to me that it can address 2 MB, if the GPU is fricking four kilobytes ?

    - And it doesn't support stack, so you gotta reserve some space for emulating stack.

    - And every variable you use is 32-bit integer (otherwise you fall down to 68000 performance levels once you start extracting separate bytes from those 32 bits, or if you load from main slow RAM),

    - thus you burn through those 4K almost instantly

    - you fit much less code than on 6502, as you don't have 8-bit instructions like INX/INY/CLC, they're 16-48 bits (2 - 6 Bytes)

    - don't get me started on how the GPU performance gets butchered once you start splitting&blitting code chunks to the cache using the infamous 64-bit blitter mode...

     

    Yeah, I was a Jag developer for a short while. It was an ambitious design slaughtered by the need to keep it all really cheap (and by insufficient hardware testing). Even though you could do some fast things with it, nobody in the game industry has got the time to figure out how to squeeze every last ounce of performance out of a complicated architecture. Especially if the docs are sparse.

     

    No, you don't. Now, it may be more convenient from coding perspective, but you don't really need it.

     

    Well, when you saddle a 6502 with a screen that big you've got two problems:

     

    1. Most of your RAM will be consumed with graphics. Double buffering will consume all of it.

    2. It's slow to update large bitmaps with a slow processor. This is one of the things that hurt the Apple IIgs. It had modes like an ST, but couldn't draw anywhere near as fast.
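
    Some rough numbers on point 2 (a Python back-of-the-envelope; the ~10 cycles/byte figure is an assumption for a simple store/index/branch loop, not a measured value):

    SCREEN_BYTES    = 32000         # e.g. 320x200 at 4 bits per pixel
    CPU_HZ          = 1_790_000     # A8-class 6502 clock, ignoring DMA stolen cycles
    CYCLES_PER_BYTE = 10            # assumed: indexed store + index update + branch

    seconds = SCREEN_BYTES * CYCLES_PER_BYTE / CPU_HZ
    print(f"~{seconds:.2f} s just to fill the screen once")   # on the order of 0.2 s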

  7. Only a little bit on topic...

     

    I would love to have a crowdfunding campaign to design an 8-bit system from the ground up. Design the NMOS chips and everything using ONLY technology from around 1980 (if there's not enough money for chips, then workalike FPGA DIPs could be used on the board). It would be fun to deliver boxed computers from an alternate history, but based on the architecture ideas we'd most like to see in a single machine.

     

    Then we'd write a fictional backstory for the machine and its parent company.

    • Like 3
  8. The 1650XLD was rumored to have an Intel chip for DOS compatibility.

     

    Sure, the Amiga chipset could be modified for 8-bit systems but what good is Amiga hardware in a system that can only see 64K without bank switching? A 320x200x16 color screen is 32K just by itself. Once your graphics get big, you need a big address space.
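
    The arithmetic (a quick Python check of that 32K figure):

    width, height, bits_per_pixel = 320, 200, 4   # 16 colors -> 4 bits per pixel
    frame_bytes = width * height * bits_per_pixel // 8
    print(frame_bytes)          # 32000 bytes, roughly half of a 64K address space
    print(2 * frame_bytes)      # double buffered, it's nearly all of it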

     

    Who knows what Atari would have done if the crash hadn't happened. They sure wasted a lot of money exploring all the possibilities.

    • Like 3
  9. Depending on the common video frequency, and the fact that Copper, Agnus, and Paula could use their own program counters (RAM access), using just a 6502 could have worked. Most games on the Amiga didn't even tickle the peak of what the machine was capable of. Games like Shadow of the Beast could run at the same speed whether a 68000 or a 6502 had been used.

    I'm pretty sure the Amiga chipset has 16-bit registers and you'd need some complex scheme to write to them. I also don't know if the chips can be accessed at 6502 speeds. In any case, I don't think that was ever anyone's intention.

  10. That is very easy to say now, in hindsight.

     

    I tend to disagree. When you've got the world's best selling micro, why would you continue to act like a company trying to break into the market with an incompatible product? It's an incredible gamble. The Plus/4 was considered a bad idea by most of the press at the time as well.

  11. The TED line should never have been released, as it had drifted too far from its goal as an ultra-budget machine and was a misguided attempt to move away from the thriving C64 ecosystem. Whereas the 128 probably shot too high, the Plus/4 shot too low. Personally, I think a revised C64 at the same price but with a few added features would have convinced many owners to upgrade and generated more sales than anything else.

  12. The thing that bugs me every time this conversation comes up is that if Commodore had pushed that bit further with the TED and given it decent sprites, then the Plus/4 would've been the machine with almost all of the advantages that keep coming up on either side, and we'd probably be discussing that instead.

     

    I suppose, but it would have been another 6502 computer in 1984, right as 68000 machines were hitting the market. It might have had an impact overseas but I doubt it would have ever been a big seller here.

  13. Well, that's kinda why I offered the Kit, so people could flip things around any way they wanted. For most installations, the top mounted jumpers are much easier to get to and the green terminal is still the highest thing on the board. If you flip both the terminal and the jumpers to the other side, then it should work well mounted on a socketed 4050.

     

    Once I get the 4050 boards in, I'll post pictures of them in use.

    • Like 2
  14.  

    Yep they were definitely heading in the right direction with the 1200XL, if only they hadn't removed what was originally planned to be there (i.e., PBI). It is such a beautiful looking machine :) .

     

     

    Yeah, it was a good and a bad decision. The 1200XL was the baby machine of the new line and yet it was priced at $900 despite being cheaper to build than the 800. I think Atari was dazzled by the prices Apple was getting.

    • Like 1
  15. Because of the 5200 reset issue, I'm making a small 4050 add-on board I'll send out for free to those wanting one. Basically, if the 4050 is socketed (this seems to be mostly 4-ports) then the only way the Plug-In board fits under the shield is to remove the 4050 which causes problems as the 4050 also handles the reset signal. The options are to solder the 4050 in (either by removing the socket or soldering it onto the UAV or the bottom of the motherboard) or building a low-profile version of UAV using the Kit. This LP version could have the terminal and jumpers soldered under the board instead of on top.

     

    If there's ever a new revision (some day, I hope), I'm going to include the 4050 functionality on the UAV itself and make it a little easier to have manufactured.

    • Like 2
  16. Today I'm getting another batch of orders out and getting caught up on PMs. If I juggle too many orders at once, I tend to get confused. :)

     

    Because of the 5200 reset issue, I'm making a small 4050 add-on board I'll send out for free to those wanting one. Basically, if the 4050 is socketed (this seems to be mostly 4-ports) then the only way the Plug-In board fits under the shield is to remove the 4050 which causes problems as the 4050 also handles the reset signal. The options are to solder the 4050 in (either by removing the socket or soldering it onto the bottom of the board) or building a low-profile version of UAV using the Kit. This LP version could have the terminal and jumpers soldered under the board instead of on top.

     

    If there's ever a new revision (some day, I hope), I'm going to include the 4050 functionality on the UAV itself and make it a little easier to have manufactured.

    • Like 2
  17. I get that - but why even make the noise in the 1st place?

    Well, others have pointed out that the sounds are free (that is, no extra code involved since Pokey's using those channels).

     

    But the question you're asking is the aesthetic one. I've debated many times whether it was a good feature or a bad one. In the end, I think it comes down to two things:

     

    1. The purchaser of a 400 or 800 had probably never owned a computer before. The sounds wouldn't be unwelcome because you wouldn't know any different.

     

    2. Atari was marketing these machines as really friendly, including the feature that there's one central expansion interface. Not only does everything hook up to the same place (well, unless you bought one of the non-SIO printers or modems) but you can hear the devices talking to the computer in a way that spotlights the SIO feature and makes you appreciate what's going on.

    • Like 4